5.00 score from hupso.pl for:
dbatoolz.com



HTML Content


Title dbatoolz | oracle to amazon rds

Length: 31, Words: 5
Description oracle to amazon rds

Length: 20, Words: 4
Keywords empty
Robots noodp
Charset UTF-8
Og Meta - Title exists
Og Meta - Description exists
Og Meta - Site name exists
The title should be between 10 and 70 characters long (including spaces) and fewer than 12 words.
The meta description should be between 50 and 160 characters long (including spaces) and fewer than 24 words.
The character set should be declared; UTF-8 is probably the best choice, since it is the most international encoding.
Open Graph objects should be present on the web page (more information about the Open Graph protocol: http://ogp.me/).

SEO Content

Words/Characters 12949
Text/HTML 66.99 %
Headings
H1 0
H2 39
H3 7
H4 5
H5 0
H6 0
H1
H2
big data migration strategy
replication
bulk copy
downtime
merge
go live
conclusion
breaking oracle: beyond 429496729 rows
it’s not what you did, son!
what’s the plan?
instrument or else!
i like to copy/paste
keep your hands off!
needle in a haystack!
do you have the brakes?
are we there yet?
do you know what you are doing?
the moment of truth
summary
tkprof: when and how to use it
when not to use tkprof
what tkprof is good for
identify sid/serial
enable sql trace
tkprof the trace file
sqlplus -s shell scripting techniques
sqlplus user/pass@tns_alias
sqlplus username@tns_alias
sqlplus /nolog
sqlplus / as sysdba
sqlplus -s
sqlplus wrapper script
oracle asm diagnostics script
oracle tablespace monitoring
v$lock: oracle rac gv$lock script
oracle ash monitoring
oracle ash top waits
oracle webiv
text based notes
H3
consistent data set
low impact on source
transform only on target
one hop data transfer
isolate bulk copy and replication
community
categories
H4
use text based notes
pick one place to store all your notes
create a centralized searchable index for all projects
rotate long term project notes on daily basis
takeaways
H5
H6
strong
big data
migration
big data
r
c
d
m
l
rcdml
rcdml
replication
bulk copy
replication
staging database
bulk copy
target database
downtime
bulk copy
merge
merge
merge
go live
rcdml
big data
bulk copy
bulk copy
merge
replication
replication
data service
high availability
dispatch
off
on
on
merge
bulk copy
bulk copy
big data
database a
database b
on
prod
target
simple
invaluable
the second interesting point
bulk copy
unload process
reload process
big data
big data
data pump
t0
t1
t1
historical
online
historical
online
option-a
enable sql trace
option-b
tip
4089
option-c
sql trace
notes
32262
note
185.22
note
note
turn off
sqlplus
sqlplus
sqlplus
sqlplus
sqlplus
sqlplus
sqlplus
the example below works on 11g and above
sqlplus
sqlplus
sqlplus scripting
sqlplus
sskgxpt
 script
long running sql
wrapper shell script
sql script
sqlplus
sqlplus
shell scripting
oracle lock monitoring
oracle event monitoring framework
sql*net message from dblink
webiv
webiv
back/forward
copy/paste
internal use only
problem solving
prevention
not
awareness
profiling
better profiling
better searching
automated daily status reports for my clients
b
i
em
Bolds
strong 110
b 0
i 110
em 110
Page content should contain more than 250 words, with a text/HTML ratio higher than 20%.
Headings: use heading tags (h1, h2, h3, ...) to define the topics of sections or paragraphs on the page, but generally use fewer than 6 of each heading tag to keep your page concise.
Style: use strong and italic tags to emphasize your page's keywords, but do not overuse them (fewer than 16 strong tags and 16 italic tags).

Page statistics

twitter:title exists
twitter:description exists
google+ itemprop=name empty
External files 13
CSS files 8
JavaScript files 5
Files: reduce the total number of referenced files (CSS + JavaScript) to at most 7-8.

Internal and external links

Links 180
Internal links 26
External links 154
Links without a Title attribute 163
Links with a NOFOLLOW attribute 0
Links: use the title attribute for every link. A nofollow link tells search engine bots not to follow it; pay attention to how you use it.

Internal links

replication #replicate
bulk copy #bulk-copy
downtime #downtime
merge #merge
go live #go-live
replication #replicate
merge #merge
rcdml #rcdml
merge #merge
replication #replicate
merge #merge
bulk copy #bulk-copy
replication #replicate
replication options chosen earlier #replicate
downtime #downtime
rcdml #rcdml
replication #replicate
bulk copy #bulk-copy
downtime #downtime
go live #go-live

External links

dbatoolz http://www.dbatoolz.com/
subscribe http://www.dbatoolz.com/subscribe
oracle to amazon rds http://www.dbatoolz.com/oracle-dba-los-angeles
big data migration strategy http://www.dbatoolz.com/t/big-data-migration-strategy.html
twitter http://nosql.mypopescu.com/post/407159447/cassandra-twitter-an-interview-with-ryan-king
facebook https://www.facebook.com/notes/facebook-engineering/moving-an-elephant-large-scale-hadoop-data-migration-at-facebook/10150246275318920/
netflix http://techblog.netflix.com/2013/02/netflix-queue-data-migration-for-high.html
eharmony http://www.dbatoolz.com/t/breaking-oracle-beyond-429496729-rows.html
san based shared storage https://en.wikipedia.org/wiki/storage_area_network
etl development team https://en.wikipedia.org/wiki/extract,_transform,_load
10 gigabit ethernet https://en.wikipedia.org/wiki/10_gigabit_ethernet
subscribe http://eepurl.com/bswgxb
vitaliy mogilevskiy http://www.dbatoolz.com/t/author/admin
operations http://www.dbatoolz.com/c/ops
big_data http://www.dbatoolz.com/t/tag/big_data
bulk_copy http://www.dbatoolz.com/t/tag/bulk_copy
replication http://www.dbatoolz.com/t/tag/replication
breaking oracle: beyond 429496729 rows http://www.dbatoolz.com/t/breaking-oracle-beyond-429496729-rows.html
microservices http://martinfowler.com/articles/microservices.html
from mysql to cassandra http://nosql.mypopescu.com/post/407159447/cassandra-twitter-an-interview-with-ryan-king
user’s movie queue http://techblog.netflix.com/2013/02/netflix-queue-data-migration-for-high.html
viggo tarasov http://www.imdb.com/character/ch0476685/
lambda architecture http://lambda-architecture.net/
subscribe http://eepurl.com/bswgxb
vitaliy mogilevskiy http://www.dbatoolz.com/t/author/admin
operations http://www.dbatoolz.com/c/ops
429496729 http://www.dbatoolz.com/t/tag/429496729
big_data http://www.dbatoolz.com/t/tag/big_data
data_pump http://www.dbatoolz.com/t/tag/data_pump
tkprof: when and how to use it http://www.dbatoolz.com/t/tkprof-when-and-how-to-use-it.html
lab environment https://oracle-base.com/articles/8i/tkprof-and-oracle-trace
ash monitoring http://www.dbatoolz.com/t/oracle_ash_monitoring.html
subscribe http://eepurl.com/bswgxb
vitaliy mogilevskiy http://www.dbatoolz.com/t/author/admin
operations http://www.dbatoolz.com/c/ops
scripts http://www.dbatoolz.com/c/scripts
spid http://www.dbatoolz.com/t/tag/spid
sql trace http://www.dbatoolz.com/t/tag/sql-trace
tkprof http://www.dbatoolz.com/t/tag/tkprof
topas http://www.dbatoolz.com/t/tag/topas
udump http://www.dbatoolz.com/t/tag/udump
sqlplus -s shell scripting techniques http://www.dbatoolz.com/t/sqlplus-s-shell-scripting.html
lasmdsk.sh https://s3.amazonaws.com/mve-shared/bin/lasmdsk.sh
oracle asm diagnostics script http://www.dbatoolz.com/t/oracle-asm-diagnostics-script.html
subscribe http://eepurl.com/bswgxb
vitaliy mogilevskiy http://www.dbatoolz.com/t/author/admin
operations http://www.dbatoolz.com/c/ops
scripts http://www.dbatoolz.com/c/scripts
oracle asm diagnostics script http://www.dbatoolz.com/t/oracle-asm-diagnostics-script.html
oracle tablespace monitoring http://www.dbatoolz.com/t/oracle-tablespace-monitoring.html
lasmdsk.sh https://s3.amazonaws.com/mve-shared/bin/lasmdsk.sh
- https://s3.amazonaws.com/mve-shared/oracle_asm/fullres/asm1.png
- https://s3.amazonaws.com/mve-shared/oracle_asm/fullres/asm2.png
- https://s3.amazonaws.com/mve-shared/oracle_asm/fullres/asm4.png
mntgrp.sh https://s3.amazonaws.com/mve-shared/bin/mntgrp.sh
- https://s3.amazonaws.com/mve-shared/oracle_asm/fullres/asm_issue_example.png
oracle dba community http://kb.dbatoolz.com/gp/418.html
vitaliy mogilevskiy http://www.dbatoolz.com/t/author/admin
operations http://www.dbatoolz.com/c/ops
scripts http://www.dbatoolz.com/c/scripts
asm http://www.dbatoolz.com/t/tag/asm
gv$asm_disk http://www.dbatoolz.com/t/tag/gvasm_disk
gv$asm_diskgroup http://www.dbatoolz.com/t/tag/gvasm_diskgroup
lun http://www.dbatoolz.com/t/tag/lun
oracle_sid http://www.dbatoolz.com/t/tag/oracle_sid
oratab http://www.dbatoolz.com/t/tag/oratab
san http://www.dbatoolz.com/t/tag/san
oracle tablespace monitoring http://www.dbatoolz.com/t/oracle-tablespace-monitoring.html
segs2: fast extending segments in the last x minutes for given ts https://s3.amazonaws.com/mve-shared/segs2.sql
segs3: fast extending segments since last datafile creation for given ts https://s3.amazonaws.com/mve-shared/segs3.sql
oracle monitoring framework http://www.dbatoolz.com/event-monitoring-system
eventorex mailing list http://eepurl.com/biebuh
eventorex mailing list http://eepurl.com/biebuh
vitaliy mogilevskiy http://www.dbatoolz.com/t/author/admin
operations http://www.dbatoolz.com/c/ops
scripts http://www.dbatoolz.com/c/scripts
awr http://www.dbatoolz.com/t/tag/awr
dba_hist_seg_stat http://www.dbatoolz.com/t/tag/dba_hist_seg_stat
dba_hist_seg_stat_obj http://www.dbatoolz.com/t/tag/dba_hist_seg_stat_obj
tablespace http://www.dbatoolz.com/t/tag/tablespace
v$lock: oracle rac gv$lock script http://www.dbatoolz.com/t/vlock-oracle-rac-gvlock-script.html
locks.sql https://s3.amazonaws.com/mve-shared/locks.sql
oracle event monitoring framework http://www.dbatoolz.com/event-monitoring-system
eventorex mailing list http://eepurl.com/biebuh
eventorex mailing list http://eepurl.com/biebuh
vitaliy mogilevskiy http://www.dbatoolz.com/t/author/admin
operations http://www.dbatoolz.com/c/ops
scripts http://www.dbatoolz.com/c/scripts
gv$lock http://www.dbatoolz.com/t/tag/gvlock
gv$session http://www.dbatoolz.com/t/tag/gvsession
rac http://www.dbatoolz.com/t/tag/rac
v$lock http://www.dbatoolz.com/t/tag/vlock
oracle ash monitoring http://www.dbatoolz.com/t/oracle_ash_monitoring.html
oracle ash top waits script: h1.sql https://s3.amazonaws.com/mve-shared/sql/h1.sql
h1d.sql https://s3.amazonaws.com/mve-shared/sql/h1d.sql
eventorex mailing list http://eepurl.com/biebuh
eventorex mailing list http://eepurl.com/biebuh
vitaliy mogilevskiy http://www.dbatoolz.com/t/author/admin
operations http://www.dbatoolz.com/c/ops
scripts http://www.dbatoolz.com/c/scripts
ash http://www.dbatoolz.com/t/tag/ash
awr http://www.dbatoolz.com/t/tag/awr
v$active_session_history http://www.dbatoolz.com/t/tag/vactive_session_history
oracle webiv http://www.dbatoolz.com/t/oracle-webiv.html
oracle insights http://www.apress.com/9781590593875
project organization http://www.dbatoolz.com/t/text_based_note_taking.html
golang https://golang.org/
community http://kb.dbatoolz.com/
join our community! http://kb.dbatoolz.com/
vitaliy mogilevskiy http://www.dbatoolz.com/t/author/admin
operations http://www.dbatoolz.com/c/ops
webiv http://www.dbatoolz.com/t/tag/webiv
text based notes http://www.dbatoolz.com/t/text_based_note_taking.html
markdown http://daringfireball.net/projects/markdown/syntax
nvalt http://brettterpstra.com/projects/nvalt/
bbedit http://www.barebones.com/products/bbedit/
hashjoin.com http://www.hashjoin.com/about
confessions of an oracle dba http://eepurl.com/bswgxb
subscribe http://eepurl.com/bswgxb
vitaliy mogilevskiy http://www.dbatoolz.com/t/author/admin
operations http://www.dbatoolz.com/c/ops
evernote http://www.dbatoolz.com/t/tag/evernote
markdown http://www.dbatoolz.com/t/tag/markdown
text http://www.dbatoolz.com/t/tag/text
2 http://www.dbatoolz.com/page/2
3 http://www.dbatoolz.com/page/3
4 http://www.dbatoolz.com/page/4
» http://www.dbatoolz.com/page/2
- http://www.dbatoolz.com/oracle-dba-los-angeles
vitaliy mogilevskiy http://www.dbatoolz.com/oracle-dba-los-angeles
- http://kb.dbatoolz.com/feed/rss_v2_0_tps.xml
community http://kb.dbatoolz.com/
bad blocks http://kb.dbatoolz.com/tp/4324.bad_blocks.html
increasing control_file_record_keep_time http://kb.dbatoolz.com/tp/2752.increasing_control_file_record_keep_time.html
pillar axiom 500 - creating luns for an oracle database http://kb.dbatoolz.com/tp/2735.pillar_axiom_500_-_creating_luns_for_an_oracle_database.html
manual database creation http://kb.dbatoolz.com/tp/2609.manual_database_creation.html
need information in using active memory sharing http://kb.dbatoolz.com/tp/5043.need_information_in_using_active_memory_sharing.html
advice with asm config for greater then 10tb instance http://kb.dbatoolz.com/tp/5041.advice_with_asm_config_for_greater_then_10tb_instance.html
issue with oraenv and oratab file. http://kb.dbatoolz.com/tp/5040.issue_with_oraenv_and_oratab_file__.html
upgrade oracle 11c to oracle 12g on aix 7 http://kb.dbatoolz.com/tp/5039.upgrade_oracle_11c_to_oracle_12g_on_aix_7.html
disk in mode 0x7f marked for de-assignment http://kb.dbatoolz.com/tp/5032.mounting_disk_group_on_single_instance_asm_throwing_error.html
opatch auto apply error: failed to get acl entries: permission denied http://kb.dbatoolz.com/tp/5036.opatch_auto_apply.html
cloud/soa http://www.dbatoolz.com/c/cloudsoa
data guard http://www.dbatoolz.com/c/data-guard
events http://www.dbatoolz.com/c/events
installs http://www.dbatoolz.com/c/installs
linux http://www.dbatoolz.com/c/linux
mac http://www.dbatoolz.com/c/mac
operations http://www.dbatoolz.com/c/ops
python http://www.dbatoolz.com/c/python
rac http://www.dbatoolz.com/c/rac
rman http://www.dbatoolz.com/c/rman
scripts http://www.dbatoolz.com/c/scripts
tuning http://www.dbatoolz.com/c/tuning

Images

Images 23
Images without an ALT attribute 0
Images without a TITLE attribute 13
Use the ALT and TITLE attributes for every image.

Images without a TITLE attribute

https://s3.amazonaws.com/mve-shared/rcdml/fullres/big_data_rcdml.png
https://s3.amazonaws.com/mve-shared/rcdml/fullres/replication.png
https://s3.amazonaws.com/mve-shared/rcdml/fullres/bulk_copy.png
https://s3.amazonaws.com/mve-shared/tkprof/lowres/topas_sample_solaris.png
https://s3.amazonaws.com/mve-shared/tkprof/lowres/tkprof_top_sql_output_example.png
https://s3.amazonaws.com/mve-shared/sqlplus/fullres/sqlplus_uname_alias.png
https://s3.amazonaws.com/mve-shared/oracle_asm/lowres/asm1.png
https://s3.amazonaws.com/mve-shared/oracle_asm/lowres/asm2.png
https://s3.amazonaws.com/mve-shared/oracle_asm/lowres/asm4.png
https://s3.amazonaws.com/mve-shared/oracle_asm/lowres/asm_issue_example.png
https://s3.amazonaws.com/mve-shared/h1_sql_output_sample1.png
https://s3.amazonaws.com/mve-shared/webiv_testimonial.png
http://www.dbatoolz.com/wp-includes/images/rss.png

Images without an ALT attribute

empty

Ranking:


Alexa Traffic charts: Daily Global Rank Trend; Daily Reach (Percent)

Majestic SEO charts

Text on page:

big data migration strategy

i analyzed three big data migration strategies performed at twitter, facebook and netflix, as well as the one we did at eharmony, and found some commonality amongst them, which i distilled down to a repeatable strategy. this strategy consists of a five step process that migrates big data from a source to a target database:

replication to staging
bulk copy to target
downtime at source
merge staging with bulk copy
go live on target

for brevity, let's coin this repeatable pattern of big data migration strategy as rcdml. let's go over some key points of the rcdml process flow: the replication starts right before the bulk copy. the replication stream is directed into the staging database. the bulk copy is pushed directly into the target database. the reason for the above split is to ensure the checkpoints and validations are isolated to each data set. optional downtime begins immediately after the bulk copy, and it ends with the merge. validation should happen before and after the merge, ensuring that record counts match. during the merge, replicated data from staging overwrites the bulk copied data. go live.

my conclusion is that the rcdml process can work at any scale. in other words – the downtime will be proportional to the amount of data generated by the live changes and not the actual big data. that's because the bulk copy takes place outside of the downtime period. obviously – the bulk copy should be completed as fast as possible to keep the replicated data set small. this reduces the time it takes to merge the two data sets together later. however, since most big data platforms have high performance bulk loading tools – we can parallelize them and achieve good throughput, keeping the replication window small. there is also something else very interesting … you see, i used to think that replication always belonged to the dba team, and that dbas should just pick an off the shelf replication tool and bolt it on to the source database. however, what i learned is actually quite the opposite, so let's talk about this in the next chapter.

replication

i prefer to develop a queue based replication process, place it directly into the main data service and channel all writes to the data layer through it. this creates an embedded high availability infrastructure that automatically evolves together with the data service and never lags behind. the replication dispatch is where you plug in the replication queues. each queue can be autonomously switched off to shut down the data flow to the database during a maintenance window. upon completion of the maintenance the queue is turned on to process the replication backlog.
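to make the queue switching concrete, here is a minimal sketch of what such a dispatch toggle could look like; the dispatch_config table, its columns and the queue name are hypothetical illustrations, not the actual implementation:

#!/bin/sh
# hypothetical sketch: pause or resume one replication queue around a
# maintenance window; the backlog drains once the queue is back on.
QUEUE=$1   # e.g. matches_q (made-up queue name)
STATE=$2   # on | off
sqlplus -s "/ as sysdba" <<EOF
-- dispatch_config (queue_name, enabled) is an assumed application table
update dispatch_config set enabled = '${STATE}' where queue_name = '${QUEUE}';
commit;
exit
EOF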
an alternative method would be to bolt on a separate replication mechanism to an existing production database, outside of the main data path. however, be aware of the pitfall in doing this! your development team will most likely be distant from the implementation and maintenance of this bolt-on replication. instead, it will be handed over to the dba/infrastructure team, and subtle nuances will slip through the cracks, potentially causing data issues down the line. your dba/infrastructure team will also be fully dependent on the third party support services for this replication tool, and this will drain your team of the invaluable problem solving skills and instead turn them into remote hands for the replication vendor.

i believe that a successful replication process should be developed in-house and placed directly in the hands of the core data services team. and it should be maintained by this development team and not the infrastructure/dba team. regardless of the replication method you choose – it should be turned on right before the bulk copy begins to ensure some overlap exists in the data. this overlap will be resolved during the merge process. i hope that i convinced you that the replication should be handled by the core data services dev team. but, what about bulk copy? who should be responsible for the bulk copy? dbas or developers? let's find out!

bulk copy

it would be impossible to cover all of the bulk copy tools here, because there are so many database platforms you might be migrating from and to. instead, i'd like to focus on the fundamental principles of getting a consistent big data snapshot and moving it from database a to database b. however, i still think it's valuable to present a concrete visual example to cover these principles. and since my big data migration strategy experience is with an oracle rdbms on san based shared storage – let's use that as a baseline for this chapter.

before we go over the above bulk copy process flow, i'd like to emphasize that replication has to be on before the bulk copy begins. this ensures that there is some overlap in the two data sets, which we resolve during the merge phase of the rcdml process. the objective is to copy a very big data set from a source database prod to a target database target. and the fundamental principles to follow are as follows:

consistent data set
low impact on source
transform only on target
one hop data transfer
isolate bulk copy and replication

consistent data set

it's imperative to extract a fully consistent data set from the source database. this will not only make the data validation phase easier, but actually possible, because you need to be able to trust your extraction process. and a basic verification that the number of rows between source and target match is both simple and invaluable.

low impact on source

the bulk copy tools are fully capable of taking 100% of the i/o bandwidth at the source database. for this reason it's important to find the right balance of throughput and i/o utilization by throttling the bulk copy process. i prefer to monitor response times directly at the source database storage subsystem. the goal here is to not let it go above an accepted baseline such as 10ms. which brings us to the second interesting point … i used to think that bulk copy belonged to the etl development team. today, i think it really belongs to the dba team. and it's because dbas are closer to the storage and database and can clearly see the impact the bulk copy tools have on these components. sure, we can have dbas and developers work together on this, but, in reality, unless they sit in the same room – it never works. there needs to be 100% ownership established for anything to be accomplished properly. so instead of creating a finger pointing scenario it's best to ensure that the fingers are always pointed back at us. if there is no one else to blame – stuff gets magically done!
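before moving on – the basic row count verification mentioned under consistent data set is simple enough to sketch; the table name and the target_db database link are assumptions for illustration:

#!/bin/sh
# hypothetical sketch: compare source and target row counts over a
# database link named target_db defined on the source side.
sqlplus -s "/ as sysdba" <<EOF
select (select count(*) from app.matches)            src_rows,
       (select count(*) from app.matches@target_db) tgt_rows
from dual;
exit
EOF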
i think it’s best to leave the extracted data set in exact same state as it was found in the source database. and instead – do all transformation during reload, directly on the target database. doing reload on the target database sets up an iterative development cycle of the transformation rules. and removes the need for a fresh source extract for each cycle. it will also make the data validation process possible because extracted data will mirror the source. one hop data transfer physically moving the big data is a monumental task. and for this reason the unload processshould deposit the data someplace where the reload process can read it directly. setting up this one hop data transfer is best through a network attached storage (nas) via a 10 gigabit ethernet network which is now widely available. isolate bulk copy and replication it’s very important to measure each data set independently using a simple count. and then, count the overlap between them. this will make the validation of the merge process as simple as adding the two and subtracting the third (overlap). downtime if the replication is setup directly in the core data service, and if it’s utilizing a queue, we can pause it and begin the merge process immediately without any downtime. on the other hand, if the replication is a bolt-on process reading transactional logs from the primary database, then the only option is to shutdown and do the merge during a real outage. that’s because the writes and reads in this case go directly into the primary database and not through a switched data service. and as such, require a hard-stop, configuration change to point to a different database. that is, if the objective is to switch to the target database following the merge. merge first step is to get the counts of the bulk copy and the replication rows and also the count of their overlap. next, it’s most often faster and easier to delete the overlap from the bulk copy set and then simply insert/append the replication set into it. finally, a validation of the counts either gives the green light for the go-live or makes this run just another rehearsal. speaking of rehearsals – there should be at least three, with the final rehearsal one week within the go-live schedule. having the last rehearsal as close to production date as possible ensures that the data growth variance is accounted for in the final schedule. and it pins the migration process in the team’s collective memory. go live the process of going live on the new target database is directly related and dependent on the replication options chosen earlier. we covered this topic in the downtime phase. specifically, if the replication is queued, switched and built into the core data service, then going live is as simple as setting the new database as primary in the dispatch configuration. this also means there was no downtime during the merge. on the other hand, if the replication is a bolt-on-read-transaction-logs type, then the downtime is already active. and we need to update configuration files for each individual service pointing them to the new database. conclusion in summary: there are 5 phases of the rcdml big data migration process, and with careful planning it’s possible to make it work at any scale. the two most critical components of any big data migration strategy are replication and bulk copy, and giving the responsibility for these components to the right team can directly effect the downtime and go live schedule. 
if you found this article helpful and would like to receive more like it as soon as i release them, make sure to sign up to my newsletter below: subscribe

vitaliy mogilevskiy january 31, 2016 posted in: operations tags: big_data, bulk_copy, replication

breaking oracle: beyond 429496729 rows

it's tue nov 4 10:55 2014. i finally get "4294967295 rows created" after 1 hour and 50 minutes have elapsed. that's (2^32 – 1) – the number oracle engineers thought you'd never reach using the data pump tool. but we reached and went beyond it a long time ago. in fact we are so far beyond it – we doubled that number and keep adding to it, over 10 million daily. and today we are decoupling our matching data store, consisting of 8.9 billion matches, from our customer facing database. all this data is going into its own database cluster. the end goal? tackle the "big data" problem and scale the matching microservices independently.

it's not what you did, son!

what we are doing is not ground breaking. twitter did something similar. they moved their "largest (and most painful to maintain) table – the statuses table, which contains all tweets and retweets" from mysql to cassandra back in 2010. netflix moved their user's movie queue from simpledb to cassandra back in 2013 using a similar approach. but as viggo tarasov said, "it's not what you did, son. it's who you did it to"! and in this case we are doing this migration on an oracle centric platform, but instead of relying on the "big iron" replication technologies we developed our own migration service and drafted a repeatable technique of dealing with the big data problem. this technique combines batch and realtime processing to apply the same set of functions on the whole data set to produce a consistent result. you could even argue that it's a similar pattern to the one used in the lambda architecture. but instead of a query being the end-result – we are after a whole data set conforming to the new data model.

what's the plan?

the migration plan we came up with has a 5-step process (eharmony matching database migration):

1. at t0 we turn on an event driven "dual-write" migration service to mirror the writes from our "legacy" database to the new database cluster. these writes go into a dedicated "empty" staging schema. if it's an update on the source db and no record exists in the staging schema then we create a new one. this migration service would be applying the same transformation rules as the "backfill" process (see 3) and the two sets of data merge afterwards.
2. at t1 we create a consistent "as-of" snapshot of our "legacy" database, mount it on a standalone node and extract the data to a shared storage.
3. start the "backfill" batch process and load the data as of t1 into the new database cluster, applying transformation rules, deduplication, partitioning and re-indexing at the same time. the "backfill" data lands in its own dedicated schema, which is separate from the "dual-write" schema (see 1).
4. once the "backfill" batch process finishes – we briefly pause the "dual-write" migration service and merge its data with the "backfill" schema. "dual-write" data overwrites any overlap with the "backfill" data because it's the latest version from the "legacy" database.
5. finally, restart the "dual-write" migration service and continue mirroring the writes to keep the two databases in sync.
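the dual-write rule in step 1 above (update the staging record if it exists, create it if it does not) maps naturally onto an upsert. the real migration service was event driven code rather than sql, so the merge statement below is purely illustrative, with made-up columns and values:

#!/bin/sh
# hypothetical sketch: the upsert semantics of the dual-write service
# expressed as a sql merge into the staging schema.
sqlplus -s "/ as sysdba" <<EOF
merge into stg.matches s
using (select 42 match_id, 'closed' status from dual) e
on (s.match_id = e.match_id)
when matched then update set s.status = e.status
when not matched then insert (match_id, status) values (e.match_id, e.status);
commit;
exit
EOF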
once the two databases are in sync we start to deploy new microservices on the new datastore. we then sample these microservices for a small cohort of users, carefully watching for any failures and performance issues. in the event of a failure we simply revert back to using the old service/datastore, fix the problem and repeat the process until our confidence level is high (all the while the dual-write migration service is keeping the two data stores in sync). and the obvious end result is switching all users to the new datastore. what follows next is a peek into the tooling and instrumentation methods that got us through steps 2–5.

instrument or else!

last night we did steps 1 and 2 – and the historical data extract finished in 8 hours 8 minutes and 11 seconds – that's 13 hours faster than the last rehearsal, during which we had some problems with our shared storage system. we keep the prior stats for reference right in the "run-doc" comment section (i go into the "run-doc" structure a little further on). to create a consistent snapshot we placed the source database in backup mode and took a storage level (san) snapshot of the underlying volumes; we then presented the snapshot to the staging database node and restored the database, bringing it to a consistent state. at this point we started an extract process using oracle's data pump tool with 16 parallel workers, dumping the data to a shared storage via nfs. for any long running process (over a few minutes long) we also snapshot the underlying system performance metrics using continuous monitoring we developed in-house – this allows us to get an early warning if the system is struggling with resources while running the job (eharmony matching database monitoring). without this instrumentation we'd be flying blind and would only know there is an issue after the prior rehearsal's elapsed time has passed.

i like to copy/paste

so far so good – we are ahead of schedule and i am now driving the "backfill" process, which consists of steps 8 through 16 (eharmony matching database migration schedule). the "run-doc" is a series of copy/paste commands that call independent shell scripts – you could string them together in a master script or run them one at a time. each individual script checks for completion errors and emails the result on exit. the run-doc limits the error-prone decision making process during production rollout and lets you focus on the task at hand even when you are half asleep. the last thing you want to be doing when tired is doubting yourself while figuring out the next step.

keep your hands off!

part of the migration process is the transformation of the legacy data into a newly designed data model capable of absorbing the big data flow and conforming it to the new set of stringent constraints. you can do the transformation during extract or reload. our testing showed that applying these data transformation rules during the "backfill/reload" process performed faster, because our target database cluster had more cpu power than the standalone staging server where we extract the legacy data. it also helps when the original extract mirrors the legacy data, just in case there are any questions during the data validation phase.
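for reference, a data pump extract along those lines could be driven roughly like this as a run-doc step; the directory object, schema name and notification address are assumptions, not the actual values:

#!/bin/sh
# hypothetical run-doc step: data pump export with 16 parallel workers
# to an nfs-backed directory object, emailing the result on exit.
DBA=dba@example.com
expdp \"/ as sysdba\" directory=DPUMP_NFS dumpfile=matches_%U.dmp \
  logfile=matches_exp.log schemas=MATCH_OWNER parallel=16
RC=$?
echo "see matches_exp.log in the DPUMP_NFS directory" | \
  mailx -s "matches export done rc=${RC} `date`" $DBA
exit $RC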
needle in a haystack!

our new data model and indexing strategy revealed a problem – duplicates! take 8.9 billion matches, double them for indexing by flipping the user1 and user2 ids (you can now look up by either id), and what you get is a cross match duplicate in a rare case when two distinct matches were mistakenly made for the same pair of people (source to target row mapping). i thought it was an easy problem to solve … and it was – during the first rehearsal, when i simply used a group by function to find the duplicates. but during our second rehearsal we must have hit a critical mass, and the same exact query plan generated twice the amount of sort – all on disk. we ran out of temp space. i was getting nowhere – three runs, each lasting over 40 minutes, failing on space every time. i gave the temp 2tb of space, it consumed it and i threw in the towel. thankfully oracle has a powerful set of analytic functions that can deal with this problem more efficiently than a simple group by function ever could. specifically the "row_number() over (partition-clause order-by-clause)", which assigns a unique number to each row in the ordered data partition as defined by the partition-clause. and when row_number() is applied to a data set with duplicates, these dups resolve to row_number() > 1 and it's easy to filter them out with a simple where clause predicate. running this filter on a full data set of 17.8 billion matches took only 50 minutes [elapsed: 00:50:02.89].

do you have the brakes?

after purging the duplicates we moved on to the indexing process, using a custom developed package that takes a table name, the number of concurrent workers and the id of the database node on which to run the index rebuild as its input parameters. it spawns x-number of index rebuild workers that read the queue table, which contains a dynamically built list of table partitions to work on. we can stop/start the index rebuild workers at any time and they'll pick up where they left off. this capability was essential during the testing phase and allowed us to carefully adjust the index rebuild parameters by monitoring their effect on the database cluster and the storage sub-system.

are we there yet?

thankfully it's easy to answer, because (1) the index rebuild workers keep a progress log (using autonomous transactions) and (2) we can simply look at the queue table (index rebuild progress). this level of instrumentation tells us exactly what table partition we are at, the counts of finished vs pending, and the average time it took to rebuild all indexes for each partition (per partition index rebuild stats). it's also easy to tell when it's all done by checking the autonomous transaction table log (autonomous logging table). at the end we also get a per table summary with the total time spent on it (index rebuild summary stats).

do you know what you are doing?

oracle optimizer doesn't, unless you give it the table/index stats it needs to come up with the correct query plan. to get these stats you run a gather statistics process that can take a full day to complete. this is not optimal during production migration, so instead we gathered these stats ahead of time, exported them out, and now we simply import them using dbms_stats.import_schema_stats – all it takes is a few seconds. just make sure to watch out for the auto-generated hash partition names [sys_p%]. we had to write a small procedure to rename them to match the names in the statistics.
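the export/import round trip for the statistics looks roughly like this; the schema, stats table and owner names are illustrative (dbms_stats itself is the documented package):

#!/bin/sh
# hypothetical sketch: gather and export schema stats ahead of time,
# then import them in seconds during the production migration.
sqlplus -s "/ as sysdba" <<EOF
-- ahead of time, on the rehearsal copy:
exec dbms_stats.create_stat_table(ownname => 'DBA_TOOLS', stattab => 'MIG_STATS');
exec dbms_stats.gather_schema_stats(ownname => 'MATCH_OWNER');
exec dbms_stats.export_schema_stats(ownname => 'MATCH_OWNER', stattab => 'MIG_STATS', statown => 'DBA_TOOLS');
-- during the migration, after the data load:
exec dbms_stats.import_schema_stats(ownname => 'MATCH_OWNER', stattab => 'MIG_STATS', statown => 'DBA_TOOLS');
exit
EOF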
the moment of truth

at this point the backfill activities are over – the data set is clean, partitioned and indexed. we briefly stop the dual-write migration service and take a count of rows:

backfill historical matches: 8867335138
dual-write online matches: 23046176

then we purge all overlapping matches from the historical data set by getting the match_id from the online data set and using a nested loop via a hint [use_nl], as it was the most efficient query plan for this operation during our testing phase. as a side note, we hit 30k iops during this operation with service times under a millisecond (3par hits 30k iops). the next step of merging the two sets of data is a simple matter of doing a direct path insert [insert append parallel] with 32 slaves and just letting oracle manage the index maintenance – a direct path insert puts the index partition in an unusable state during the data load, but it also keeps track of its changes via undo segments and resolves the delta after the data load completes. it only took 1.5 minutes (01:30.13) to insert the 23046176 rows and do the index maintenance. the last step is to simply flip the synonyms for the database access points that the online/batch apis hit and turn the system over for use. mission accomplished!

summary

in summary – our big data restructuring approach breaks up the data set into a historical "as-of" snapshot and the live data stream. the historical data set is handled by a batch process and the live data by a migration service that gets events from a message broker (we used hornetq). the two sets merge afterwards and the message broker handles the backlog of the live data stream during the merge. if you found this article helpful and would like to receive more like it as soon as i release them, make sure to sign up to my newsletter below: subscribe

vitaliy mogilevskiy january 15, 2016 posted in: operations tags: 429496729, big_data, data_pump

tkprof: when and how to use it

as an oracle specialist you've probably heard of sql trace and its brother tkprof. sql trace generates a low level trace file that has a complete chronological record of everything a session is doing and waiting for when it "talks" to the database. tkprof, on the other hand, takes that trace file and aggregates all of the low level details into an easy to read report. this report can then be quickly analyzed to find the root cause of the slow performance. there are a lot of samples online showing the use of tkprof in a lab environment. but i haven't seen one that actually shows you a real use-case similar to what you'll face on your job. so instead of giving you all possible options for turning on sql trace and all possible flags the tkprof utility accepts, i'll show you exactly how and when i use it, with a real example. sounds good?

when not to use tkprof

first, let's cover when tkprof is not a good option. do not use tkprof to analyze short lived sessions that are typical on a very busy website. you are much better off looking at ash monitoring. it's because only ash can give you an aggregated view across all sessions and what they are waiting for. it can also pinpoint the sql_id across all of these sessions that is generating the most i/o waits. so use ash when you are troubleshooting a performance problem that spans 99% of all sessions in the database. an example of this would be when you get a call from your dev-ops team saying that their response time from the database has gone up substantially in the last 15 minutes.

what tkprof is good for

my #1 use-case for tkprof is when i am tracing a badly performing batch job.
batch jobs work on a large set of data, either looping over each record or performing a set operation on all of them. and for this reason a batch job is usually long running, and a developer will typically have an expectation of when it should complete. when a job doesn't complete on time – you will either get a call to look into the problem, or your monitoring system will alert you that there is a long running process consuming large resources on the database server. in either case you are now tasked with finding the root cause of a problem. and sometimes your monitoring system will give you enough details to pinpoint the problem (usually a single poorly written sql). but there are cases when you need to dig deep. there could be multiple poorly performing sql statements that are switching back and forth, making it difficult to tell which one is the culprit. and in these instances sql trace and tkprof really come in handy, so let's go over the whole process now.

identify sid/serial

first we need to identify the sid/serial# of the job so we can enable sql trace on it. there are a couple of ways you can do this:

option-a: if your monitoring system picked up the poorly performing session's sid/serial# – then move on to the next step (enable sql trace), otherwise choose one of the two options below.

option-b: i always like to get on the actual database node and check how much damage this session is really doing. most of the time the poorly performing batch job will be burning up cpu and i/o. this makes it very easy to find it using the following shell command, which lists the top 10 process ids (pids) currently active on the server:

ps -eo pcpu,pid,user,args | sort -k 1 -r -n | head -10

the above snippet works on linux. here's an equivalent for sun solaris:

/usr/ucb/ps -aux | head

i usually save this snippet in a file called topas and put it in my user's personal bin directory so i can execute it without the need to remember the long syntax. btw, the reason i call it topas is because there is a great utility on ibm aix that does exactly this function and it's called topas.

tip: if you are on oracle rac, you will need to first identify which node the job is running on. i prefer to constrain batch jobs to run on a specific node using dedicated connections. this makes it much easier to find them later (i create a custom db service and ask developers to use it when making a connection to the database). here's a sample output of the above snippet that clearly shows that the pid (4089) is the culprit we need to trace (it's consuming the most amount of cpu on the server):

once we have the pid we can find the sid using the following sql:

clear col
set head on
set pages 60
set lines 300
set trims on
set tab off
col sid format 9999
col serial# format 999999
col username format a15
col machine format a15 trunc
col osuser format a12
col maction format a45
col status format a12
col spid format a10
col process format a10
col event format a30 trunc
col seconds_in_wait format 9999 heading secw
select s.sid, s.serial#, s.username,
       s.status, s.osuser, s.machine, s.process, s.event,
       s.seconds_in_wait, p.spid, s.module||' '||s.action maction
from v$session s, v$process p
where (s.process = '&&1' or s.paddr = (select addr from v$process where spid = '&&1'))
  and s.paddr = p.addr;
exit;

save the above script in /tmp/pid.sql and call it from sqlplus as follows:

pid=
sqlplus -s "/ as sysdba" @/tmp/pid.sql $pid

it should produce a report giving you the sid/serial# for the batch job in question.

option-c: what if you don't know which database node the job is running on? or, what if your db server is so utilized by other processes that topas doesn't put the runaway job at the top of its list? in this case, i ask the developers to give me the name of the machine the job was submitted from, and optionally the client_identifier they use for the process. i then use the following shell script to find the sid/serial for this job:

#!/bin/ksh
# $id: msid.sh 30 2016-01-07 23:36:07z mve $
# copyright 2016 hashjoin (http://www.hashjoin.com/). all rights reserved.
node=$1
cid=$2
if [ ${node}"x" == "x" ]; then
  print -n "enter machine name: "
  read node
  print
fi
if [ ${cid}"x" == "x" ]; then
  client_id=""
else
  client_id="or s.client_identifier like '${cid}'"
fi
sqlplus -s /nolog <
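once you have the sid/serial# (via option-a or b, or option-c above), the standard way to switch the trace on and format the resulting file looks like this; dbms_monitor is the documented interface, while the sid/serial values, trace file name and sort flags are examples:

#!/bin/sh
# example: enable sql trace (with wait events) for sid=1234 serial#=56789,
# let the job run, disable the trace, then format the trace file.
sqlplus -s "/ as sysdba" <<EOF
exec dbms_monitor.session_trace_enable(session_id => 1234, serial_num => 56789, waits => true, binds => false);
exit
EOF
# ... after enough of the workload is captured:
sqlplus -s "/ as sysdba" <<EOF
exec dbms_monitor.session_trace_disable(session_id => 1234, serial_num => 56789);
exit
EOF
# the trace lands in user_dump_dest, named after the server process spid:
tkprof MYDB_ora_4089.trc /tmp/tkprof_4089.out sort=exeela,fchela,prsela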
define _editor=vi
set echo on

if you save the above code in a script called login.sql and then place this script in the directory where you start sqlplus from – you'll get the same result. i highly recommend doing this.

sqlplus /nolog

the /nolog tells sqlplus to skip the login and go directly to the sqlplus prompt, where you can make a connection. i use this when calling sqlplus from a shell script directly on the oracle database server, because it allows me to make a connection using connect / as sysdba and then pass sqlplus some quick commands to process. for example, here's a quick way to dump an oracle systemstate in order to find which session is holding a particular library cache lock (the example below works on 11g and above):

sqlplus /nolog <

'${sqlscript}');
dbms_session.set_identifier('${sqlscript}');
commit;
end;
/
@${sqlscript}.sql $sqlparams
spool off
exit
eof
mailx -s "${sqlscript} done `date`" $dba < ${sqlscript}.log

we can then call it as follows:

nohup ./exec_sql.sh mysql_script > mysql_script.hn &
tail -f mysql_script.hn

we are executing a sql script mysql_script.sql and piping its output to mysql_script.hn, which we then start viewing "live" using tail -f. and while the above script is executing we can open another sqlplus session to the same database and execute the following sql to monitor what the script is doing or waiting for:

set lines 132
set pages 1000
set trims on
col client_identifier format a20
col action format a17
col p1text format a14
col event format a30
select inst_id, event, p1raw, max(seconds_in_wait) max_wait,
       trunc(avg(seconds_in_wait)) avg_wait, count(*), state
from gv$session
where client_identifier = 'mysql_script' and wait_time=0
group by inst_id, event, p1raw, state
order by inst_id, event;

as soon as the script is finished, exec_sql.sh will send us an email with the subject "mysql_script done date" and pipe the log file generated by the script into the email body for our review. and there you have it – we just went over my favorite ways to utilize sqlplus in shell scripting. armed with these techniques you can start developing some very elaborate automation scripts in your oracle environment. the best way to start with this is to just try something small – find a problem you need solved and slowly build a script to attack it. happy scripting! and if you found this writeup useful please subscribe to my newsletter and get new articles as soon as i post them: subscribe

vitaliy mogilevskiy december 30, 2015 posted in: operations, scripts
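to round out the wrapper discussion above – a minimal exec_sql.sh style skeleton could look like the following; the connection method, variable names and notification address are assumptions rather than the original listing:

#!/bin/sh
# hypothetical exec_sql.sh style skeleton: tag the session with a client
# identifier, run the script, spool a log, email the log on exit.
SQLSCRIPT=$1
SQLPARAMS=$2
DBA=dba@example.com
sqlplus -s "/ as sysdba" <<EOF
spool ${SQLSCRIPT}.log
begin
  dbms_session.set_identifier('${SQLSCRIPT}');
end;
/
@${SQLSCRIPT}.sql ${SQLPARAMS}
spool off
exit
EOF
mailx -s "${SQLSCRIPT} done `date`" $DBA < ${SQLSCRIPT}.log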
oracle asm diagnostics script

as i mentioned in oracle tablespace monitoring, i worked for a company that operated a very large internet site, and our space consumption was at the rate of 800gb every month. we were adding 4x400gb luns to our rac cluster every two months. and this amount of space needed to be added not only to the primary production site, but also to the snapshot db, the reporting db and the physical standby db in a remote datacenter. needless to say, i've seen a number of multipath/storage and asm issues, and to make my life easier i developed an oracle asm diagnostics script that allowed me to perform some basic health-checks on the state of asm groups and disks as seen from the database side. the script is called lasmdsk.sh and it does the following:

check if asm is running and parse the asm instance id (asm1,2,3,n etc).
parse the asm oracle_home from /etc/oratab using the asm instance id from the previous step, and set up the oracle_sid and oracle_home environment variables accordingly.
run the asmcmd command and print the following attributes: state, type, rebal, sector, block, au, total_mb, free_mb, req_mir_free_mb, usable_file_mb, offline_disks, voting_files, name.
and finally, dig deep into the asm data dictionary, joining gv$asm_disk and gv$asm_diskgroup, to check for the most common issues we've seen in our shop while adding san provisioned multipath'ed luns to our databases.

however, the true utility of this script is in how quickly and easily it allows me to filter the output of the above query. and to really demonstrate this – let me give you a real example of how i add new luns under asm and provision them to a real production data group. let's say i just created four new asm disks (data_105, data_106, data_107 and data_108) using the sudo /etc/init.d/oracleasm createdisk command, and then i did scandisks and listdisks on all rac nodes. now it's time to verify that gv$asm_disk.header_status = provisioned. i could set up the oracle_home, sid and path variables to point to the asm/grid oh, then log in to sqlplus and run the query selecting header_status from gv$asm_disk where name ... hmm … do you see the problem? i now have to use an in or or operator to get all 4 disks, because there is no common pattern to give to a like operator, unless i use regex – and who is going to do that on the fly? contrast this with my script instead:

wget https://s3.amazonaws.com/mve-shared/bin/lasmdsk.sh
./lasmdsk.sh "_105|_106|_107|_108"

easy!
and it works because inside the script i wrap the query in a shell function and then pipe its output to egrep, which does the filtering faster and easier than is possible inside oracle:

get_asm | egrep "inst_id|^--|${asmdisk}"

now i simply run ./lasmdsk.sh "_105|_106|_107|_108", check that header_status = provisioned, and move to the next step, which is creating a test disk group and adding the 4 new disks to it to make sure everything works as expected:

ASMID=`ps -ef | grep pmon | grep asm | awk '{print $NF}' | sed 's/asm_pmon_//g'`
ORACLE_HOME=`grep ${ASMID} /etc/oratab | awk -F: '{print $2}'`
export ORACLE_HOME
PATH=$PATH:$ORACLE_HOME/bin
export PATH
ORACLE_SID=${ASMID}; export ORACLE_SID
sqlplus / as sysasm <
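that sysasm step presumably creates and drops a throwaway disk group along these lines; the diskgroup name and the asmlib disk strings are illustrative:

# hypothetical sketch of the sysasm test: build a throwaway disk group
# from the 4 new disks, then drop it once it mounts cleanly.
sqlplus / as sysasm <<EOF
create diskgroup testdg external redundancy
  disk 'ORCL:DATA_105', 'ORCL:DATA_106', 'ORCL:DATA_107', 'ORCL:DATA_108';
drop diskgroup testdg including contents;
exit
EOF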
v$lock: oracle rac gv$lock script

@locks.sql

blocked objects from gv$lock and sys.obj$

   inst_id        sid      lmode min_blocked blocked_obj
---------- ---------- ---------- ----------- -----------------------------------
         3       3961          3           0 appuser_owner.dbjobrequests
         3       3866          3           0 appuser_owner.dbjobrequests
         5       3887          3           0 appuser_owner.dbjobrequests
         3       3484          3           0 appuser_owner.dbjobrequests
         3       3161          3           0 appuser_owner.dbjobrequests
         3       2998          3           0 appuser_owner.dbjobrequests
         3       2979          3           0 appuser_owner.dbjobrequests
         3       2752          3           1 appuser_owner.dbjobrequests
         3       2618          3           0 appuser_owner.dbjobrequests
         3       2610          3           0 appuser_owner.dbjobrequests
         3       2456          3           0 appuser_owner.dbjobrequests
         3       2368          3           0 appuser_owner.dbjobrequests
         3       2243          3           0 appuser_owner.dbjobrequests
         3       2134          3           0 appuser_owner.dbjobrequests
         3       2132          3           0 appuser_owner.dbjobrequests
         6       3854          3           0 appuser_owner.dbjobrequests
         6       3507          3           0 appuser_owner.dbjobrequests
         6       3417          3           0 appuser_owner.dbjobrequests
         6       3303          3           0 appuser_owner.dbjobrequests
         6       3222          3           1 appuser_owner.dbjobrequests
         6       3135          3           0 appuser_owner.dbjobrequests
         6       2804          3           0 appuser_owner.dbjobrequests
         6       2786          3           0 appuser_owner.dbjobrequests
         4       3818          3           0 appuser_owner.dbjobrequests
         4       2869          3           0 appuser_owner.dbjobrequests

25 rows selected.
elapsed: 00:00:00.03

blocked sessions from gv$lock

   inst_id blocker_sid    inst_id blocked_sid min_blocked    request
---------- ----------- ---------- ----------- ----------- ----------
         4        3084          6        3135           0          6
         4        3084          6        3485           0          6

2 rows selected.
elapsed: 00:00:00.02

blocked session details from gv$session and gv$sqltext

instance........ : 6
sid ............ : 3135
serial ......... : 30604
username ....... : app1user_name
sql id ......... : null
prev sql id .... : gm424t8fyx3w6
displayed sql id : gm424t8fyx3w6
client info .... : null
machine ........ : dbt4.dc1.mydomain.com
osuser ......... : dbt
process ........ : 1234
action ......... : jdbc thin client

sql_text
----------------------------------------------------------------------
select this_.workrequestid as workrequ1_1_0_, this_.createtime as
createtime1_0_, this_.event_type as event3_1_0_, this_.status as
status1_0_, this_.userid as userid1_0_ from dbjobrequests this_
where this_.workrequestid = :1 and this_.status=:2 for update

instance........ : 6
sid ............ : 3485
serial ......... : 45149
username ....... : app1user_name
sql id ......... : null
prev sql id .... : gm424t8fyx3w6
displayed sql id : gm424t8fyx3w6
client info .... : null
machine ........ : dbt5.dc1.mydomain.com
osuser ......... : dbt
process ........ : 1234
action ......... : jdbc thin client

sql_text
----------------------------------------------------------------------
select this_.workrequestid as workrequ1_1_0_, this_.createtime as
createtime1_0_, this_.event_type as event3_1_0_, this_.status as
status1_0_, this_.userid as userid1_0_ from dbjobrequests this_
where this_.workrequestid = :1 and this_.status=:2 for update

10 rows selected.
elapsed: 00:00:09.33

blocker session details from gv$session and gv$sqltext (current or previous sql)

instance........ : 4
sid ............ : 3084
serial ......... : 8911
username ....... : app1user_name
sql id ......... : null
prev sql id .... : 629vx81ykvhpp
displayed sql id : 629vx81ykvhpp
client info .... : null
machine ........ : dbt1.dc1.mydomain.com
osuser ......... : dbt
process ........ : 1234
action ......... : jdbc thin client

sql_text
----------------------------------------------------------------------
update dbt_lock set finished=:1 , version=:2 where user_id=:3 and version=:4

2 rows selected.
elapsed: 00:00:10.13

this script presented a few performance challenges, because a self-join query against gv$lock joined with sys.obj$ to get a list of blocked objects is very expensive in a cluster environment; in fact it's expensive even in a single instance environment. we also have to join gv$session with the result of the self-join query against gv$lock in order to get the sql_text of the sessions doing the blocking and being blocked – that's extremely slow as well. to solve the above performance challenges i created two tables and indexed them appropriately:

gv$ table    copy table       indexed columns
gv$lock      gv_lock_mon      type, block
gv$session   gv_session_mon   inst_id, sid

once that was done it was a simple matter of replacing the gv$ table name with the copy table name on the key joins, and performance shot up through the roof. in fact, it was so lightweight that i created a custom event in my monitoring system and started to trap occurrences of these db blocks for historical purposes, so that when a developer came to our team and asked us if there were any db locks/blocks 3 hours ago, we could simply review our alerts and answer that question with authority, providing exact details on the race condition that caused these blocks. this was much more helpful than the generic alert email we'd get from oem stating that session xyz is blocking this many sessions on instances 1, 4 and 5, for example. my question to you is: how do you monitor oracle locks? is oem alerting sufficient for your needs? do you think a solution such as the one i outlined above would be beneficial to your team? i am considering adding the oracle lock monitoring feature to the oracle event monitoring framework i am developing. if you think it's a good idea then let me know by joining the eventorex mailing list and i'll notify you on the progress and when the private beta becomes available. eventorex mailing list

vitaliy mogilevskiy december 9, 2015 posted in: operations, scripts tags: gv$lock, gv$session, rac, v$lock
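the copy-table trick behind that speedup could be sketched like this; the exact ddl and refresh strategy are assumptions (the real monitoring job would repopulate these tables on each poll):

#!/bin/sh
# hypothetical sketch: materialize gv$lock and gv$session into plain
# tables and index them so the self-joins stop hammering the fixed views.
sqlplus -s "/ as sysdba" <<EOF
create table gv_lock_mon as select * from gv\$lock;
create index gv_lock_mon_i1 on gv_lock_mon (type, block);
create table gv_session_mon as select * from gv\$session;
create index gv_session_mon_i1 on gv_session_mon (inst_id, sid);
exit
EOF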
once that was done, it was a simple matter of replacing the gv$ table name with the copy table name on the key joins, and performance shot up through the roof. in fact, it was so lightweight that i created a custom event in my monitoring system and started to trap occurrences of these db blocks for historical purposes, so that when a developer came to our team and asked us if there were any db locks/blocks 3 hours ago, we could simply review our alerts and answer that question with authority, providing exact details on the race condition that caused the blocks. this was much more helpful than the generic alert email we'd get from oem stating that session xyz is blocking this many sessions on instances 1, 4 and 5, for example.

my question to you is: how do you monitor oracle locks? is oem alerting sufficient for your needs? do you think a solution such as the one i outlined above would be beneficial to your team? i am considering adding the oracle lock monitoring feature to the oracle event monitoring framework i am developing. if you think it's a good idea then let me know by joining the eventorex mailing list and i'll notify you on the progress and when the private beta becomes available.

eventorex mailing list

vitaliy mogilevskiy december 9, 2015 posted in: operations, scripts tags: gv$lock, gv$session, rac, v$lock

oracle ash monitoring

i have over 60 oracle diagnostic sql scripts in my arsenal. a lot of them hit ash and awr. i've tested these scripts on a high traffic web-site backed by a 6 node oracle rac cluster. every one of these scripts saved my day at one point, but there is one script that stands out from them all – the oracle ash top waits script: h1.sql.

oracle ash top waits

this little gem is the #1 thing i reach for when i am asked to troubleshoot a performance problem that was reported hours ago. for example, let's say it's 9:30am and i get a call from a dev saying that her apex web application is hanging – she suspects a locking issue. all i need to know at this point is when the problem was first reported. armed with the start time (let's say 7:00am) i simply do this:

sqlplus / as sysdba @h1 0700 0930 0 -1 -1

what's happening here? there are 5 parameters:

1: start hhmi [0700 = 7:00am]
2: end hhmi [0930 = 9:30am]
3: days back [0 = today; or 7 = seven days back]
4: instance [1 = inst_id=1, give -1 for all]
5: service_hash [1225919510 = dba_services.name_hash, give -1 for all]

and here's what i get back: can you spot the problem? it's "sql*net message from dblink" – there are no locks. it's a simple problem of a badly written distributed query that is waiting on the remote db. that was easy!

one other hidden benefit of this script is that it saves its output in a table under a run_id (in this case run_id=81), allowing you to compare the output of two run_ids and clearly spot the differences in values grouped by wait event. this is extremely valuable, especially when someone says "this used to work last week!" – you simply do this:

sqlplus / as sysdba @h1 0700 0930 7 -1 -1

the "7" in the third parameter instructs the script to look 7 days back for the same time frame (7-9:30am). the output of the above report will have its own run_id=82 (next in sequence) and you can now compare the two using a special script h1d.sql like so:

@h1d 81 82

we didn't need oem or any gui apps to get to the bottom of the problem – all because the diagnostics data is already in awr tables and is available to us directly from the command line / sqlplus. years ago we'd have to sample v$session_wait to get similar diagnostics; in fact, i wrote a complete monitoring system that utilized such a technique. but now oracle built this into the core engine in the form of active session history (ash), which automatically samples this data every second with practically no overhead to the database! that is an incredible level of instrumentation available to us and it would be a shame not to utilize it beyond what the oem reports are capable of. note however that ash sampling is short lived – only 1/10th of its samples are saved in awr, based on some internal thresholds oracle came up with.

now imagine that we:

take a script like h1.sql and refactor it to use ash instead of awr, because ash samples are at higher fidelity (awr only gets 1/10th of ash data).
run this script every 3 minutes to capture the live heartbeat of the database (a rough sketch of such a sampler follows this list).
define your own thresholds on top of this sampling and get notified if something is amiss.
save all this data for historical purposes so that even if awr gets wiped out you have solid performance metrics years later.
wrap this all up in an easy to deploy (single binary) distribution which only takes a minute to install on a new host.
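none of that exists as a product yet, but the sampling piece is easy to sketch. a rough cut, assuming a table called eventorex_ash was pre-created as an empty copy of gv$active_session_history (the table name and script path are just for illustration):

#!/bin/bash
# hypothetical 3-minute ash sampler
# cron: */3 * * * * /oracle/dba/bin/ash_sampler.sh
sqlplus -s / as sysdba <<'EOF'
insert into eventorex_ash
select * from gv$active_session_history
 where sample_time > systimestamp - interval '3' minute;
commit;
exit;
EOF

a real version would have to de-duplicate overlapping samples and prune old rows, but the core of it is just that one insert.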
are you interested? does this sound like something you'd like in your shop? if yes, then i'd like to get some feedback from you, because i am building it! sign up for the eventorex mailing list to start the discussion and get the private beta as soon as it's ready (no spam here, i guarantee it!):

eventorex mailing list

vitaliy mogilevskiy december 2, 2015 posted in: operations, scripts tags: ash, awr, v$active_session_history

oracle webiv

here's a quote about webiv from oracle insights: i had the pleasure of working with webiv while i was at oracle back in 2000-2001 and it was the most useful tool at our disposal:

the interface was dead simple
all articles were linked and indexed
searches were lightning fast
back/forward buttons actually took you back and forward
copy/paste preserved the carefully crafted white space from the article

i wonder if webiv is still live at oracle? i hope it wasn't replaced by the same clunky interface that metalink (aka oracle support) eventually became. i still keep a bunch of legacy webiv copy/pastes in my notes folder for sentimental reasons – here's one example:

article-id:
alias: suptool:orasniff
circulation: under_edit (internal)
***oracle confidential - internal use only***
folder: support.systems
topic: introduction to support systems
title: suptool: orasniff - find installed versions
document-type: reference
impact: medium
skill-level: casual
updated-date: 03-jul-2000 09:39:28
references:
shared-refs:
authors: xxxxxxx.uk
attachments: none
content-type: text/plain
products: 0; platforms: generic;

'orasniff' basically 'sniffs' around all the instances on a unix box for an installed oracle product.
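orasniff stayed an internal tool, but the idea only takes a few lines of shell; a rough orasniff-style sweep (a sketch, not the real thing):

#!/bin/bash
# running instances, from their pmon processes ([p] keeps grep from matching itself)
ps -ef | grep '[p]mon' | sed 's/.*pmon_//'
# installed homes registered in oratab: sid and oracle_home
grep -v '^#' /etc/oratab | awk -F: 'NF >= 2 {print $1, $2}'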
and every time i work a sticky sr with oracle support that gets nowhere, i always wish they would just give me access to webiv so i can find the solution myself. in fact, for a whole year after i left oracle my metalink account had full access to internal use only articles and bugs, and it made my life so much easier through the oracle apps 11.0.3 upgrade that i drove at alcatel at that time.

i brought up webiv because while porting my project organization bash script to golang i realized that i don't need to make it a command line tool only – instead, thanks to go's built-in http server, i can make it a full blown web-based app that will run in the browser on your workstation! and why not model it after the infamous webiv?! you might think it's a crazy idea – how can a legacy app from the previous era inspire me? well, it does! simply because it embodies the term form follows function. and perhaps because i started on this road a while back and it was something i came across shortly after my oracle journey began, so it holds sentimental value. have you had the privilege of working with webiv? join our community and let us know!

join our community!

vitaliy mogilevskiy august 4, 2015 posted in: operations tags: webiv

text based notes

if you worked with me even for a day you must have noticed i am constantly taking notes – that's because i believe note taking is the foundation of problem solving. here's how it typically works: when i am called in to solve a problem on an existing project or to help architect a new solution, there are three repeating problem solving patterns i tend to use:

prevention: if you avoid making a wrong decision in the first place, it's a way of solving a problem before it even occurs. and providing a solid example of what not to do is a great way to accomplish this.
awareness: if you are aware of the weak links in the chain and the symptoms they typically exhibit, you can orient and find the culprit quickly.
profiling: if you profile the issue long enough, then over time it becomes easier to identify the culprit by doing a pattern recognition exercise on the information you've gathered.

i can confidently say that all of the above techniques hugely benefit from careful note taking and note organization. in this post i'll explain how i manage my notes using a very simple method – a method that will help you organize notes and project files, and help you find relevant information even years later, by keeping the storage structure consistent from project to project.

use text based notes

i've tried popular apps like evernote but eventually gave up on them, because apps that store notes in non-plain text fail one fundamental requirement i have: keeping my carefully crafted whitespace intact. instead i use plain text with markdown syntax and can quickly convert any note to html or pdf format. text notes can be read on any device and any os, and text is a timeless format that will be readable/searchable many years after you and i are gone.

pick one place to store all your notes

it's extremely frustrating when you can't find the information you know you had. the solution i've adopted is to store all my notes in one top folder/directory organized by client/project. i even put my personal notes there under a client called personal – that way, if i am searching for something i wrote down, i know to always look there first regardless of the work/personal context.

create a centralized searchable index for all projects

this is key to finding things years later – i have a special folder called index and it contains a separate index file for each client/category combination that includes titles of every project under that particular client/category. for example, if my client/category/project folder structure is:

clients
├── client1
│   ├── proj
│   │   ├── x_category_i
│   │   │   ├── project_a
│   │   │   ├── project_b
│   │   │   └── project_c
│   │   └── x_support
│   │       ├── ticket_1234
│   │       ├── ticket_1235
│   │       └── ticket_1236
├── client2
│   ├── proj
│   │   ├── x_category_v
│   │   │   ├── project_a
│   │   │   ├── project_b
│   │   │   └── project_c
│   │   └── x_support
│   │       ├── ticket_2234
│   │       ├── ticket_2235
│   │       └── ticket_2236

then the index files will be as follows:

index
├── client1x_x_category_i.txt
├── client1x_x_support.txt
├── client2x_x_category_v.txt
└── client2x_x_support.txt

and within each of the index files the contents will be as follows:

client1x_x_category_i.txt
20150205-14:14:32 client1/proj/x_category_i/project_a
20150612-12:09:49 client1/proj/x_category_i/project_b
20150706-22:27:29 client1/proj/x_category_i/project_c

client1x_x_support.txt
20150219-16:41:56 client1/proj/x_support/ticket_1234
20150307-11:45:55 client1/proj/x_support/ticket_1235
20150319-11:12:19 client1/proj/x_support/ticket_1236

client2x_x_category_v.txt
20150520-12:21:14 client2/proj/x_category_v/project_a
20150528-15:54:25 client2/proj/x_category_v/project_b
20150609-18:28:16 client2/proj/x_category_v/project_c

client2x_x_support.txt
20150619-10:34:37 client2/proj/x_support/ticket_2234
20150624-13:36:46 client2/proj/x_support/ticket_2235
20150626-21:33:41 client2/proj/x_support/ticket_2236
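these index files are generated, not hand-typed (more on the program below); a rough sketch of the core idea in shell, using the folder's mtime in place of the real start date, with macos/bsd stat and a made-up top folder:

#!/bin/bash
# rebuild one index file from the project folders (sketch only)
cd ~/clients || exit 1      # hypothetical top folder
mkdir -p index
for proj in client1/proj/x_category_i/*/; do
  stamp=$(stat -f '%Sm' -t '%Y%m%d-%H:%M:%S' "$proj")
  printf '%s %s\n' "$stamp" "${proj%/}"
done > index/client1x_x_category_i.txt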
i then use nvalt to index the index folder, and in turn it allows me to quickly answer the following questions:

what did we do for ticket_2236? – answer in client2/proj/x_support/ticket_2236
what project did i start on 20150609? – answer in client2/proj/x_category_v/project_c

note that every client is suffixed with an x and every category is prefixed with an x_ – this ensures that when you search for a client or a category name that happens to be part of a common word, the search results are cleaner. for example, what if the category name is db – try searching for that in a repository full of database related topics and you'll get everything, but when you change it to x_db the results are filtered to just that category. obviously the index files are automatically maintained by the program i wrote (which i'll share in my next post, so stay tuned).

rotate long term project notes on a daily basis

we went over the client/category/project folder structure, but what happens underneath that? it's very simple – just plain text daily work notes that all share the name of the project and a timestamp of the day they were created on. for example, under the folder client2/proj/x_category_v/project_c i might have:

-rw-r--r--@ 1 mve staff 14126 jun  9 21:04 project_c_20150609.txt
-rw-r--r--@ 1 mve staff 14689 jul  7 13:04 project_c_20150707.txt
-rw-r--r--@ 1 mve staff  7314 jul 10 17:33 project_c_20150710.txt
-rw-r--r--@ 1 mve staff 32269 jul 10 17:25 project_c_work.txt

note the project_c_work.txt – it's a special local index file where all daily work files check in and where i can add project level information such as client contacts, key dates/deliverables etc. at this point you might be wondering why i rotate project notes on a daily basis instead of having one single text file per project. the answer is simple – having daily work files allows for:

better profiling – it's easier to profile an issue when you can see how it progresses from day to day.
better searching – i use bbedit as my text editor and it allows me to open all files under a specific folder at once, so if i switch to the client2/proj/x_category_v directory and type bbedit project_c, it'll open all files under that project and keep them in the left pane of the bbedit editor, allowing me to quickly switch from day to day and see what i worked on. try to do this if everything is in one single file – not that easy.
automated daily status reports for my clients – because i use markdown format in my text notes, i can quickly grep (search) for ^# (headlines) in the daily work file and it'll give me all the headlines for the day (see the sketch below). i then paste these headlines in an email/status report and my client gets a 30k ft view of the work done today. i also attach the daily work file to the customer support portal which i maintain for each of my clients on hashjoin.com, and by doing so provide an ongoing knowledge base to them.
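that headline grep is small enough to show; paths follow the example layout above:

# headlines for today's status email (markdown '#' lines only)
cd ~/clients/client2/proj/x_category_v/project_c
grep '^#' "project_c_$(date +%Y%m%d).txt"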
takeaways

prevention, awareness and profiling are the three major methods used in problem solving.
careful note taking greatly benefits the above methods and improves your problem solving capacity.
adopting a repeatable note taking workflow/organization will ensure you can find relevant information quickly.
storing notes in plain text will ensure long term compatibility and preserve whitespace.
placing all notes (personal and business) in a single top folder ensures you'll know where to find them later.
indexing the notes by client/category/project/date simplifies search.
rotating project level notes daily can help profiling, searching and status reporting on long term projects/issues.

if you liked this post – sign up for my newsletter confessions of an oracle dba.

subscribe

vitaliy mogilevskiy july 13, 2015 posted in: operations tags: evernote, markdown, text


done i - 0.02% (2)
mounted on - 0.02% (2)
all nodes - 0.02% (2)
./lasmdsk.sh testgrp - 0.02% (2)
mount the - 0.02% (2)
nodes in - 0.02% (2)
system that - 0.02% (2)
where i - 0.02% (2)
then we - 0.02% (2)
lasmdsk.sh: ./lasmdsk.sh - 0.02% (2)
and i/o - 0.02% (2)
the real - 0.02% (2)
tablespace monitoring - 0.02% (2)
diskgroup testgrp - 0.02% (2)
snapshot to - 0.02% (2)
cause in - 0.02% (2)
“backfill” process - 0.02% (2)
cluster every - 0.02% (2)
two months. - 0.02% (2)
dealing with - 0.02% (2)
the issue - 0.02% (2)
a series - 0.02% (2)
you how - 0.02% (2)
disk group - 0.02% (2)
and db - 0.02% (2)
two scripts - 0.02% (2)
fast extending - 0.02% (2)
for given - 0.02% (2)
last datafile - 0.02% (2)
let it - 0.02% (2)
it’s important - 0.02% (2)
our rac - 0.02% (2)
new disks - 0.02% (2)
any scale. - 0.02% (2)
very large - 0.02% (2)
space in - 0.02% (2)
3084 6 - 0.02% (2)
can save - 0.02% (2)
your own - 0.02% (2)
new luns - 0.02% (2)
company that - 0.02% (2)
operated a - 0.02% (2)
internet site - 0.02% (2)
awk '{print - 0.02% (2)
and our - 0.02% (2)
important to - 0.02% (2)
whole data - 0.02% (2)
the rate - 0.02% (2)
of 800gb - 0.02% (2)
this technique - 0.02% (2)
adding 4x400gb - 0.02% (2)
----------- ----------- - 0.02% (2)
from dbjobrequests - 0.02% (2)
the “big - 0.02% (2)
to store - 0.02% (2)
help you - 0.02% (2)
find relevant - 0.02% (2)
point to - 0.02% (2)
use text - 0.02% (2)
on them - 0.02% (2)
through a - 0.02% (2)
carefully crafted - 0.02% (2)
be read - 0.02% (2)
you know - 0.02% (2)
switch to - 0.02% (2)
a solution - 0.02% (2)
store all - 0.02% (2)
primary database - 0.02% (2)
my personal - 0.02% (2)
go directly - 0.02% (2)
file for - 0.02% (2)
folder structure - 0.02% (2)
– during - 0.02% (2)
step process - 0.02% (2)
very simple - 0.02% (2)
the information - 0.02% (2)
├── project_a - 0.02% (2)
based notes - 0.02% (2)
a target - 0.02% (2)
– there - 0.02% (2)
a whole - 0.02% (2)
the go-live - 0.02% (2)
validation of - 0.02% (2)
set and - 0.02% (2)
our community - 0.02% (2)
join our - 0.02% (2)
the overlap - 0.02% (2)
doing a - 0.02% (2)
called in - 0.02% (2)
and easier - 0.02% (2)
source to - 0.02% (2)
solution i’ve - 0.02% (2)
it even - 0.02% (2)
a great - 0.02% (2)
aware of - 0.02% (2)
be accomplished - 0.02% (2)
identify the - 0.02% (2)
consists of - 0.02% (2)
project_b │   - 0.02% (2)
internal use - 0.02% (2)
my clients - 0.02% (2)
my text - 0.02% (2)
open all - 0.02% (2)
files under - 0.02% (2)
a specific - 0.02% (2)
all files - 0.02% (2)
from day - 0.02% (2)
to day - 0.02% (2)
the one - 0.02% (2)
text notes - 0.02% (2)
one single - 0.02% (2)
day. i - 0.02% (2)
the daily - 0.02% (2)
it will - 0.02% (2)
by doing - 0.02% (2)
the need - 0.02% (2)
relevant information - 0.02% (2)
will ensure - 0.02% (2)
find them - 0.02% (2)
extracted data - 0.02% (2)
the answer - 0.02% (2)
notes on - 0.02% (2)
└── project_c - 0.02% (2)
db – - 0.02% (2)
└── x_support - 0.02% (2)
a bolt-on - 0.02% (2)
to quickly - 0.02% (2)
– answer - 0.02% (2)
answer in - 0.02% (2)
a client - 0.02% (2)
category name - 0.02% (2)
happens to - 0.02% (2)
– you’ll - 0.02% (2)
where the - 0.02% (2)
results are - 0.02% (2)
extract or - 0.02% (2)
project notes - 0.02% (2)
on daily - 0.02% (2)
as simple - 0.02% (2)
best to - 0.02% (2)
project and - 0.02% (2)
transfer is - 0.02% (2)
jul 10 - 0.02% (2)
and every - 0.02% (2)
the final - 0.02% (2)
database cluster. - 0.02% (2)
gv$lock in - 0.02% (2)
where this_.workrequestid - 0.02% (2)
and this_.status=:2 - 0.02% (2)
for upda - 0.02% (2)
: 629vx81ykvhpp - 0.02% (2)
performance challenges - 0.02% (2)
self-join query - 0.02% (2)
against gv$lock - 0.02% (2)
query against - 0.02% (2)
tables and - 0.02% (2)
and performance - 0.02% (2)
simple matter - 0.02% (2)
gv$ table - 0.02% (2)
copy table - 0.02% (2)
historical purposes - 0.02% (2)
so that - 0.02% (2)
team and - 0.02% (2)
if there - 0.02% (2)
3 hours - 0.02% (2)
these components - 0.02% (2)
thi s_ - 0.02% (2)
as userid1_0_ - 0.02% (2)
ash monitoring - 0.02% (2)
gm424t8fyx3w6 displayed - 0.02% (2)
details from - 0.02% (2)
gv$session and - 0.02% (2)
is going - 0.02% (2)
from our - 0.02% (2)
target database. - 0.02% (2)
it’s because - 0.02% (2)
so far - 0.02% (2)
right before - 0.02% (2)
beyond it - 0.02% (2)
status1_0_, this_.userid - 0.02% (2)
data pump - 0.02% (2)
and can - 0.02% (2)
50 minutes - 0.02% (2)
---------------------------------------------------------------------- select - 0.02% (2)
this_.workrequestid as - 0.02% (2)
workrequ1_1_0_, this_.createtime - 0.02% (2)
createtime1_0_, this_.event_type - 0.02% (2)
as event3_1_0_, - 0.02% (2)
this_.status as - 0.02% (2)
it’s best - 0.02% (2)
it’s possible - 0.02% (2)
crafted white - 0.02% (2)
as possible - 0.02% (2)
fix the - 0.02% (2)
live on - 0.02% (2)
of instrumentation - 0.02% (2)
available to - 0.02% (2)
for in - 0.02% (2)
to utilize - 0.02% (2)
sampling is - 0.02% (2)
1/10th of - 0.02% (2)
for historical - 0.02% (2)
in awr - 0.02% (2)
purposes so - 0.02% (2)
from you - 0.02% (2)
discussion and - 0.02% (2)
ready (no - 0.02% (2)
spam here - 0.02% (2)
i guarantee - 0.02% (2)
it!): eventorex - 0.02% (2)
working with - 0.02% (2)
at oracle - 0.02% (2)
command line - 0.02% (2)
to target - 0.02% (2)
and with - 0.02% (2)
-1 for - 0.02% (2)
a high - 0.02% (2)
rac cluster. - 0.02% (2)
top waits - 0.02% (2)
oracle ash - 0.02% (2)
call from - 0.02% (2)
@h1 0700 - 0.02% (2)
give -1 - 0.02% (2)
for all] - 0.02% (2)
here’s what - 0.02% (2)
special script - 0.02% (2)
the problem? - 0.02% (2)
are no - 0.02% (2)
is that - 0.02% (2)
simple as - 0.02% (2)
simply do - 0.02% (2)
this: sqlplus - 0.02% (2)
sysdba @h1 - 0.02% (2)
0700 0930 - 0.02% (2)
going live - 0.02% (2)
a small - 0.02% (2)
the log - 0.02% (2)
does the - 0.02% (2)
for when - 0.02% (2)
and calls - 0.02% (2)
on it. - 0.02% (2)
udump directory - 0.02% (2)
can then - 0.02% (2)
after is - 0.02% (2)
ls -lta - 0.02% (2)
files will - 0.02% (2)
put it - 0.02% (2)
of everything - 0.02% (2)
out all - 0.02% (2)
sample the - 0.02% (2)
file name - 0.02% (2)
dependent on - 0.02% (2)
a complete - 0.02% (2)
dba/infrastructure team - 0.02% (2)
oinstall 1296 - 0.02% (2)
1296 dec - 0.02% (2)
1 00:23 - 0.02% (2)
sure it’s - 0.02% (2)
the waits - 0.02% (2)
services for - 0.02% (2)
file we - 0.02% (2)
dba account - 0.02% (2)
of enabling - 0.02% (2)
saying that - 0.02% (2)
database. an - 0.02% (2)
problem that - 0.02% (2)
example lets - 0.02% (2)
say that - 0.02% (2)
a performance - 0.02% (2)
that my - 0.02% (2)
server is - 0.02% (2)
that are - 0.02% (2)
example to - 0.02% (2)
and what - 0.02% (2)
all sessions - 0.02% (2)
on dbms_system. - 0.02% (2)
grant execute - 0.02% (2)
at ash - 0.02% (2)
your dba - 0.02% (2)
trace level - 0.02% (2)
file in - 0.02% (2)
low level - 0.02% (2)
it using - 0.02% (2)
file on - 0.02% (2)
that’s all - 0.02% (2)
matter of - 0.02% (2)
plan for - 0.02% (2)
to it’s - 0.02% (2)
to process - 0.02% (2)
off the - 0.02% (2)
sets of - 0.02% (2)
turn off - 0.02% (2)
during a - 0.02% (2)
to it! - 0.02% (2)
direct path - 0.02% (2)
the objective - 0.02% (2)
30k iops - 0.02% (2)
this operation - 0.02% (2)
was the - 0.02% (2)
the historical - 0.02% (2)
matches from - 0.02% (2)
dual-write migration - 0.02% (2)
transformation during - 0.02% (2)
we briefly - 0.02% (2)
path insert - 0.02% (2)
search for - 0.02% (2)
data stream - 0.02% (2)
merge afterwords - 0.02% (2)
down the - 0.02% (2)
using tkprof - 0.02% (2)
all you - 0.02% (2)
all there - 0.02% (2)
now that - 0.02% (2)
most likely - 0.02% (2)
team will - 0.02% (2)
file generated - 0.02% (2)
message broker - 0.02% (2)
can open - 0.02% (2)
grep "total" - 0.02% (2)
a batch - 0.02% (2)
0.66 0 - 0.02% (2)
up the - 0.02% (2)
a separate - 0.02% (2)
(note: the - 0.02% (2)
rows and - 0.02% (2)
to insert - 0.02% (2)
data load - 0.02% (2)
readable report - 0.02% (2)
a trace - 0.02% (2)
that automatically - 0.02% (2)
copy begins - 0.02% (2)
'||s.action maction - 0.02% (2)
node the - 0.02% (2)
exists in - 0.02% (2)
execute it - 0.02% (2)
called topas - 0.02% (2)
sqlplus as - 0.02% (2)
some overlap - 0.02% (2)
giving you - 0.02% (2)
following shell - 0.02% (2)
col seconds_in_wait - 0.02% (2)
it very - 0.02% (2)
the poorly - 0.02% (2)
to give - 0.02% (2)
me the - 0.02% (2)
the machine - 0.02% (2)
was submitted - 0.02% (2)
i always - 0.02% (2)
– then - 0.02% (2)
trace on - 0.02% (2)
p.spid, s.module||' - 0.02% (2)
a30 trunc - 0.02% (2)
to tell - 0.02% (2)
and ask - 0.02% (2)
above snippet - 0.02% (2)
handled by - 0.02% (2)
head on - 0.02% (2)
are so - 0.02% (2)
60 set - 0.02% (2)
connection to - 0.02% (2)
developers to - 0.02% (2)
you that - 0.02% (2)
col sid - 0.02% (2)
process format - 0.02% (2)
col serial# - 0.02% (2)
format 999999 - 0.02% (2)
col username - 0.02% (2)
them later - 0.02% (2)
col machine - 0.02% (2)
osuser format - 0.02% (2)
maction format - 0.02% (2)
spid format - 0.02% (2)
batch jobs - 0.02% (2)
to identify - 0.02% (2)
set head - 0.02% (2)
doing to - 0.02% (2)
enabling sql - 0.02% (2)
example, lets - 0.02% (2)
can use - 0.02% (2)
point we - 0.02% (2)
can finally - 0.02% (2)
hands of - 0.02% (2)
this session. - 0.02% (2)
even touch - 0.02% (2)
step of - 0.02% (2)
running and - 0.02% (2)
value of - 0.02% (2)
sid/serial# for - 0.02% (2)
the slow - 0.02% (2)
trace for - 0.02% (2)
i believe - 0.02% (2)
is turned - 0.02% (2)
on – - 0.02% (2)
waits and - 0.02% (2)
calls this - 0.02% (2)
performing batch - 0.02% (2)
a developer - 0.02% (2)
and client_id - 0.02% (2)
pages 60 - 0.02% (2)
machine format - 0.02% (2)
system will - 0.02% (2)
a problem. - 0.02% (2)
off col - 0.02% (2)
99 heading - 0.02% (2)
sid format - 0.02% (2)
serial# format - 0.02% (2)
999999 col - 0.02% (2)
username format - 0.02% (2)
a15 col - 0.02% (2)
col osuser - 0.02% (2)
submitted from - 0.02% (2)
col maction - 0.02% (2)
call to - 0.02% (2)
s.sid,s.serial#,s.username, s.status,s.osuser, - 0.02% (2)
the fundamental - 0.02% (2)
above code - 0.02% (2)
data services - 0.02% (2)
have an - 0.02% (2)
machine the - 0.02% (2)
job was - 0.02% (2)
set is - 0.02% (2)
it. this - 0.02% (2)
of asm - 0.02% (2)
the time - 0.02% (2)
contain the - 0.02% (2)
wrapper script - 0.02% (2)
scripts – - 0.02% (2)
series of - 0.02% (2)
data sets - 0.02% (2)
have it’s - 0.02% (2)
it takes - 0.02% (2)
insert append - 0.02% (2)
am now - 0.02% (2)
during production - 0.02% (2)
on alter - 0.02% (2)
session set - 0.02% (2)
session enable - 0.02% (2)
the system - 0.02% (2)
enable parallel - 0.02% (2)
sid from - 0.02% (2)
rownum = - 0.02% (2)
a wrapper - 0.02% (2)
developed in-house - 0.02% (2)
each individual - 0.02% (2)
be able - 0.02% (2)
running process - 0.02% (2)
oradebug ipc - 0.02% (2)
after all - 0.02% (2)
where we - 0.02% (2)
however that - 0.02% (2)
set from - 0.02% (2)
in shell - 0.02% (2)
example here’s - 0.02% (2)
to verify - 0.02% (2)
that replication - 0.02% (2)
data flow - 0.02% (2)
execute get_asm - 0.02% (2)
us the - 0.02% (2)
to think - 0.02% (2)
i used - 0.02% (2)
validation phase - 0.02% (2)
lines 132 - 0.02% (2)
| egrep - 0.02% (2)
thing you - 0.02% (2)
if asmdisk - 0.02% (2)
it wasn’t - 0.02% (2)
performance metrics - 0.02% (2)
i call - 0.02% (2)
it also - 0.02% (2)
were adding - 0.02% (2)
the prior - 0.02% (2)
a company - 0.02% (2)
that operated - 0.02% (2)
large internet - 0.02% (2)
site and - 0.02% (2)
consumption was - 0.02% (2)
rate of - 0.02% (2)
800gb every - 0.02% (2)
4x400gb luns - 0.02% (2)
a storage - 0.02% (2)
of rows - 0.02% (2)
every two - 0.02% (2)
of space - 0.02% (2)
not only - 0.02% (2)
db and - 0.02% (2)
the amount - 0.02% (2)
the dual-write - 0.02% (2)
problem and - 0.02% (2)
the state - 0.02% (2)
go into - 0.02% (2)
the underlying - 0.02% (2)
sysdba set - 0.02% (2)
execute the - 0.02% (2)
a shared - 0.02% (2)
serveroutput on - 0.02% (2)
size unlimited - 0.02% (2)
select sid - 0.02% (2)
where rownum - 0.02% (2)
piping it’s - 0.02% (2)
data to - 0.02% (2)
pump tool - 0.02% (2)
following sql - 0.02% (2)
not the - 0.02% (2)
to monitor - 0.02% (2)
process using - 0.02% (2)
132 set - 0.02% (2)
staging database - 0.02% (2)
an email - 0.02% (2)
the pid - 0.02% (2)
outside of - 0.02% (2)
it – - 0.02% (2)
went over - 0.02% (2)
extract the - 0.02% (2)
in case - 0.02% (2)
watch out - 0.02% (2)
list of - 0.02% (2)
tnsnames.ora file. - 0.02% (2)
to answer - 0.02% (2)
it’s easy - 0.02% (2)
also the - 0.02% (2)
way of - 0.02% (2)
making a - 0.02% (2)
time and - 0.02% (2)
are executing - 0.02% (2)
their terminal. - 0.02% (2)
level of - 0.02% (2)
fundamental principles - 0.02% (2)
to connect - 0.02% (2)
low impact - 0.02% (2)
much more - 0.02% (2)
password by - 0.02% (2)
grep sqlplus - 0.02% (2)
from their - 0.02% (2)
a remote - 0.02% (2)
from my - 0.02% (2)
database is - 0.02% (2)
it’s also - 0.02% (2)
it when - 0.02% (2)
sqlplus script - 0.02% (2)
we simply - 0.02% (2)
writes to - 0.02% (2)
and now - 0.02% (2)
ahead of - 0.02% (2)
these stats - 0.02% (2)
we gathered - 0.02% (2)
i also - 0.02% (2)
process that - 0.02% (2)
output in - 0.02% (2)
and where - 0.02% (2)
main data - 0.02% (2)
stats it - 0.02% (2)
a total - 0.02% (2)
and i’ll - 0.02% (2)
sqlplus username@tns_alias - 0.02% (2)
it directly - 0.02% (2)
and place - 0.02% (2)
autonomous transaction - 0.02% (2)
replication process - 0.02% (2)
simply save - 0.02% (2)
takes a - 0.02% (2)
any questions - 0.02% (2)
them with - 0.02% (2)
order to - 0.02% (2)
find which - 0.02% (2)
sysdba oradebug - 0.02% (2)
dump systemstate - 0.02% (2)
266 oradebug - 0.02% (2)
tracefile_name eof - 0.02% (2)
we wrap - 0.02% (2)
eof words - 0.02% (2)
save these - 0.02% (2)
isolate bulk - 0.02% (2)
using this - 0.02% (2)
what it - 0.02% (2)
oradebug dump - 0.02% (2)
systemstate 266 - 0.02% (2)
8.9 billion - 0.02% (2)
problem – - 0.02% (2)
be wondering - 0.02% (2)
where a - 0.02% (2)
username/password or - 0.02% (2)
by function - 0.02% (2)
must have - 0.02% (2)
is setup - 0.02% (2)
off set - 0.02% (2)
username and - 0.02% (2)
with – - 0.02% (2)
on source - 0.02% (2)
between them. - 0.02% (2)
transform only - 0.02% (2)
accomplish this - 0.02% (2)
called login.sql - 0.02% (2)
contents: set - 0.02% (2)
defined by - 0.02% (2)
set serveroutput - 0.02% (2)
use this - 0.02% (2)
on size - 0.02% (2)
code in - 0.02% (2)
out of - 0.02% (2)
start sqlplus - 0.02% (2)
you’ll get - 0.02% (2)
the /nolog - 0.02% (2)
and go - 0.02% (2)
replication tool - 0.02% (2)
make a - 0.02% (2)
the query - 0.02% (2)
3 0 appuser_owner.dbjobrequests - 0.18% (23)
/ as sysdba - 0.14% (17)
the bulk copy - 0.1% (13)
│   │   │   - 0.1% (12)
0 appuser_owner.dbjobrequests 3 - 0.1% (12)
posted in: operations - 0.08% (10)
│   │   ├── - 0.08% (10)
sqlplus / as - 0.07% (9)
at this point - 0.06% (8)
to find the - 0.06% (8)
1 oracle oinstall - 0.06% (8)
the trace file - 0.06% (7)
in a shell - 0.06% (7)
connect / as - 0.06% (7)
to the new - 0.06% (7)
the new data - 0.06% (7)
as soon as - 0.06% (7)
0 appuser_owner.dbjobrequests 6 - 0.06% (7)
2015 posted in: - 0.06% (7)
posted in: operations, - 0.05% (6)
eventorex mailing list - 0.05% (6)
in: operations, scripts - 0.05% (6)
on the database - 0.05% (6)
│   │   └── - 0.05% (6)
soon as i - 0.05% (6)
the output of - 0.05% (6)
big data migration - 0.05% (6)
all of the - 0.05% (6)
on to the - 0.05% (6)
the two data - 0.04% (5)
<it’s output to - 0.04% (5)
in this case - 0.04% (5)
the next step - 0.04% (5)
the index rebuild - 0.04% (5)
subscribe vitaliy mogilevskiy - 0.04% (5)
vitaliy mogilevskiy december - 0.04% (5)
directly on the - 0.04% (5)
using the following - 0.04% (5)
operations, scripts tags: - 0.04% (5)
we are after - 0.04% (5)
the root cause - 0.04% (5)
for this reason - 0.04% (5)
sql trace and - 0.04% (5)
this makes it - 0.03% (4)
the new database - 0.03% (4)
format a10 col - 0.03% (4)
the replication is - 0.03% (4)
you found this - 0.03% (4)
output of the - 0.03% (4)
enable sql trace - 0.03% (4)
data migration strategy - 0.03% (4)
on set tab - 0.03% (4)
the sql trace - 0.03% (4)
sqlplus -s i - 0.03% (4)
to my newsletter - 0.03% (4)
we have the - 0.03% (4)
set tab off - 0.03% (4)
trims on set - 0.03% (4)
root cause of - 0.03% (4)
of the above - 0.03% (4)
if the replication - 0.03% (4)
rows selected. elapsed: - 0.03% (4)
if you found - 0.03% (4)
trace file and - 0.03% (4)
the source database. - 0.03% (4)
the core data - 0.03% (4)
and in this - 0.03% (4)
on the other - 0.03% (4)
find the root - 0.03% (4)
step is to - 0.03% (4)
1 mve staff - 0.03% (4)
it allows me - 0.03% (4)
allows me to - 0.03% (4)
in: operations tags: - 0.03% (4)
the target database - 0.03% (4)
on the oracle - 0.02% (3)
client sql_text ---------------------------------------------------------------------- - 0.02% (3)
lines 300 set - 0.02% (3)
how to use - 0.02% (3)
: jdbc thin - 0.02% (3)
1234 action ......... - 0.02% (3)
we need to - 0.02% (3)
set timing on - 0.02% (3)
are it’s contents: - 0.02% (3)
to use it - 0.02% (3)
process ........ : - 0.02% (3)
set time on - 0.02% (3)
cause of the - 0.02% (3)
the oracle server - 0.02% (3)
when i am - 0.02% (3)
get a call - 0.02% (3)
consistent data set - 0.02% (3)
your monitoring system - 0.02% (3)
......... : dbt - 0.02% (3)
"x" ]; then - 0.02% (3)
oracle asm diagnostics - 0.02% (3)
id ......... : - 0.02% (3)
osuser ......... : - 0.02% (3)
: null machine - 0.02% (3)
bulk copy tools - 0.02% (3)
client info .... - 0.02% (3)
id .... : - 0.02% (3)
null prev sql - 0.02% (3)
: app1user_name sql - 0.02% (3)
: 1234 action - 0.02% (3)
serial ......... : - 0.02% (3)
the eventorex mailing - 0.02% (3)
the database server. - 0.02% (3)
use the following - 0.02% (3)
sql trace i - 0.02% (3)
oracle rac cluster - 0.02% (3)
i prefer to - 0.02% (3)
dbt process ........ - 0.02% (3)
......... : jdbc - 0.02% (3)
null machine ........ - 0.02% (3)
......... : null - 0.02% (3)
format a12 col - 0.02% (3)
info .... : - 0.02% (3)
col event format - 0.02% (3)
displayed sql id - 0.02% (3)
i’ll show you - 0.02% (3)
the above script - 0.02% (3)
prev sql id - 0.02% (3)
app1user_name sql id - 0.02% (3)
thin client sql_text - 0.02% (3)
username ....... : - 0.02% (3)
newsletter below: subscribe - 0.02% (3)
sqlplus -s /nolog - 0.02% (3)
you have a - 0.02% (3)
save the above - 0.02% (3)
in the last - 0.02% (3)
and to make - 0.02% (3)
vitaliy mogilevskiy january - 0.02% (3)
to receive more - 0.02% (3)
sure to sign - 0.02% (3)
migration service and - 0.02% (3)
as i release - 0.02% (3)
them make sure - 0.02% (3)
to sign up - 0.02% (3)
below: subscribe vitaliy - 0.02% (3)
2016 posted in: - 0.02% (3)
300 set trims - 0.02% (3)
of the rcdml - 0.02% (3)
receive more like - 0.02% (3)
eharmony matching database - 0.02% (3)
release them make - 0.02% (3)
directly into the - 0.02% (3)
set echo on - 0.02% (3)
to make it - 0.02% (3)
output to egrep - 0.02% (3)
it as soon - 0.02% (3)
would like to - 0.02% (3)
before the bulk - 0.02% (3)
bulk copy and - 0.02% (3)
hop data transfer - 0.02% (3)
here is to - 0.02% (3)
luns to our - 0.02% (3)
name of the - 0.02% (3)
data is a - 0.02% (3)
one hop data - 0.02% (3)
the merge process - 0.02% (3)
article helpful and - 0.02% (3)
shell function and - 0.02% (3)
to the next - 0.02% (3)
the script is - 0.02% (3)
the index files - 0.02% (3)
-ef | grep - 0.02% (3)
other hand, if - 0.02% (3)
in {2..6} do - 0.02% (3)
“dual-write” migration service - 0.02% (3)
of the sql - 0.02% (3)
would be a - 0.02% (3)
tells sqlplus to - 0.02% (3)
mailing list vitaliy - 0.02% (3)
the live data - 0.02% (3)
the sqlplus -s - 0.02% (3)
of the database - 0.02% (3)
the private beta - 0.02% (3)
-s /nolog <this article helpful - 0.02% (3)
and it was - 0.02% (3)
and would like - 0.02% (3)
the same time - 0.02% (3)
you might be - 0.02% (3)
this trace file - 0.02% (3)
like it as - 0.02% (3)
a shell function - 0.02% (3)
go over the - 0.02% (3)
sqlplus -s in - 0.02% (3)
site and our - 0.02% (2)
get_asm | egrep - 0.02% (2)
get the private - 0.02% (2)
in an oracle - 0.02% (2)
new disks to - 0.02% (2)
for a company - 0.02% (2)
sign up for - 0.02% (2)
check if asm - 0.02% (2)
beta as soon - 0.02% (2)
real production data - 0.02% (2)
this script is - 0.02% (2)
parse the asm - 0.02% (2)
asm instance id - 0.02% (2)
very large internet - 0.02% (2)
that operated a - 0.02% (2)
the discussion and - 0.02% (2)
give you a - 0.02% (2)
ssh racdb0${x} /oracle/dba/bin/mntgrp.sh - 0.02% (2)
, 'orcl:data_106' , - 0.02% (2)
for given ts - 0.02% (2)
using lasmdsk.sh: ./lasmdsk.sh - 0.02% (2)
cluster every two - 0.02% (2)
on all but - 0.02% (2)
disks to the - 0.02% (2)
to our rac - 0.02% (2)
the 4 new - 0.02% (2)
cluster using the - 0.02% (2)
mounted on all - 0.02% (2)
racdb0${x} /oracle/dba/bin/mntgrp.sh testgrp - 0.02% (2)
{2..6} do ssh - 0.02% (2)
nodes in the - 0.02% (2)
space consumption was - 0.02% (2)
rac cluster using - 0.02% (2)
'orcl:data_107' , 'orcl:data_108'; - 0.02% (2)
– you can - 0.02% (2)
adding 4x400gb luns - 0.02% (2)
a tablespace data - 0.02% (2)
of 800gb every - 0.02% (2)
framework i am - 0.02% (2)
alter diskgroup prod_data1 - 0.02% (2)
verify that the - 0.02% (2)
the state of - 0.02% (2)
at the rate - 0.02% (2)
data set from - 0.02% (2)
as it’s ready - 0.02% (2)
working with webiv - 0.02% (2)
project_b │   │   - 0.02% (2)
├── project_a │   - 0.02% (2)
bulk copy, and - 0.02% (2)
the information you - 0.02% (2)
to store all - 0.02% (2)
text based notes - 0.02% (2)
in this post - 0.02% (2)
it was the - 0.02% (2)
guarantee it!): eventorex - 0.02% (2)
the rcdml process - 0.02% (2)
spam here i - 0.02% (2)
it’s ready (no - 0.02% (2)
discussion and get - 0.02% (2)
like to get - 0.02% (2)
an easy to - 0.02% (2)
purposes so that - 0.02% (2)
@h1 0700 0930 - 0.02% (2)
do this: sqlplus - 0.02% (2)
│   └── project_c - 0.02% (2)
x_support │   │   - 0.02% (2)
here’s what i - 0.02% (2)
happens to be - 0.02% (2)
to find them - 0.02% (2)
in a single - 0.02% (2)
to do this - 0.02% (2)
me to quickly - 0.02% (2)
all files under - 0.02% (2)
when you can - 0.02% (2)
long term project - 0.02% (2)
– you’ll get - 0.02% (2)
this ensures that - 0.02% (2)
│   ├── project_a - 0.02% (2)
with an x - 0.02% (2)
– answer in - 0.02% (2)
i then use - 0.02% (2)
each of the - 0.02% (2)
will be as - 0.02% (2)
└── x_support │   - 0.02% (2)
project_c │   │   - 0.02% (2)
├── project_b │   - 0.02% (2)
output in a - 0.02% (2)
sysdba @h1 0700 - 0.02% (2)
(no spam here - 0.02% (2)
---------------------------------------------------------------------- select this_.workrequestid - 0.02% (2)
this_.status=:2 for upda - 0.02% (2)
s_ where this_.workrequestid - 0.02% (2)
from dbjobrequests thi - 0.02% (2)
this_.userid as userid1_0_ - 0.02% (2)
this_.status as status1_0_, - 0.02% (2)
this_.event_type as event3_1_0_, - 0.02% (2)
a s createtime1_0_, - 0.02% (2)
as workrequ1_1_0_, this_.createtime - 0.02% (2)
id : gm424t8fyx3w6 - 0.02% (2)
.... : gm424t8fyx3w6 - 0.02% (2)
gm424t8fyx3w6 displayed sql - 0.02% (2)
details from gv$session - 0.02% (2)
i used to - 0.02% (2)
the dba team. - 0.02% (2)
objects from gv$lock - 0.02% (2)
in this example - 0.02% (2)
if you don’t - 0.02% (2)
i guarantee it!): - 0.02% (2)
6 sid ............ - 0.02% (2)
: gm424t8fyx3w6 client - 0.02% (2)
simply do this: - 0.02% (2)
session details from - 0.02% (2)
armed with the - 0.02% (2)
performance problem that - 0.02% (2)
at any scale. - 0.02% (2)
think it’s a - 0.02% (2)
for historical purposes - 0.02% (2)
a simple matter - 0.02% (2)
query against gv$lock - 0.02% (2)
gv$session and gv$sqltext - 0.02% (2)
for upda te - 0.02% (2)
select this_.workrequestid as - 0.02% (2)
:1 and this_.status=:2 - 0.02% (2)
where this_.workrequestid = - 0.02% (2)
dbjobrequests thi s_ - 0.02% (2)
as userid1_0_ from - 0.02% (2)
as status1_0_, this_.userid - 0.02% (2)
as event3_1_0_, this_.status - 0.02% (2)
s createtime1_0_, this_.event_type - 0.02% (2)
workrequ1_1_0_, this_.createtime a - 0.02% (2)
on the state - 0.02% (2)
sqlplus /nolog <every two months. - 0.02% (2)
live data stream - 0.02% (2)
the poorly performing - 0.02% (2)
you can do - 0.02% (2)
sql trace on - 0.02% (2)
a long running - 0.02% (2)
that there is - 0.02% (2)
you get a - 0.02% (2)
when you are - 0.02% (2)
during the merge. - 0.02% (2)
and do the - 0.02% (2)
the following shell - 0.02% (2)
the data load - 0.02% (2)
direct path insert - 0.02% (2)
simple matter of - 0.02% (2)
two sets of - 0.02% (2)
was the most - 0.02% (2)
historical data set - 0.02% (2)
and now we - 0.02% (2)
it’s easy to - 0.02% (2)
of the index - 0.02% (2)
very easy to - 0.02% (2)
is a great - 0.02% (2)
for the same - 0.02% (2)
process format a10 - 0.02% (2)
submitted from and - 0.02% (2)
the job was - 0.02% (2)
of the machine - 0.02% (2)
at the top - 0.02% (2)
the job is - 0.02% (2)
and s.paddr = - 0.02% (2)
p.spid, s.module||' '||s.action - 0.02% (2)
a30 trunc col - 0.02% (2)
trunc col osuser - 0.02% (2)
node the job - 0.02% (2)
format a15 col - 0.02% (2)
999999 col username - 0.02% (2)
col serial# format - 0.02% (2)
sid format 9999 - 0.02% (2)
tab off col - 0.02% (2)
pages 60 set - 0.02% (2)
head on set - 0.02% (2)
is the culprit - 0.02% (2)
find them later - 0.02% (2)
defined by the - 0.02% (2)
during the data - 0.02% (2)
as sysdba set - 0.02% (2)
each data set - 0.02% (2)
the counts of - 0.02% (2)
the objective is - 0.02% (2)
to point to - 0.02% (2)
the primary database - 0.02% (2)
that’s because the - 0.02% (2)
replication is a - 0.02% (2)
hand, if the - 0.02% (2)
as simple as - 0.02% (2)
make the data - 0.02% (2)
the last rehearsal - 0.02% (2)
extract or reload. - 0.02% (2)
so instead of - 0.02% (2)
needs to be - 0.02% (2)
i think it - 0.02% (2)
to think that - 0.02% (2)
us to the - 0.02% (2)
data validation phase - 0.02% (2)
isolate bulk copy - 0.02% (2)
low impact on - 0.02% (2)
faster and easier - 0.02% (2)
not what you - 0.02% (2)
during extract or - 0.02% (2)
the “backfill” data - 0.02% (2)
you want to - 0.02% (2)
focus on the - 0.02% (2)
matching database migration - 0.02% (2)
to get an - 0.02% (2)
snapshot of the - 0.02% (2)
the historical data - 0.02% (2)
the dual-write migration - 0.02% (2)
on the new - 0.02% (2)
the two databases - 0.02% (2)
the “backfill” batch - 0.02% (2)
we are doing - 0.02% (2)
new database cluster - 0.02% (2)
“backfill” batch process - 0.02% (2)
to a shared - 0.02% (2)
sets of data - 0.02% (2)
we create a - 0.02% (2)
exists in the - 0.02% (2)
whole data set - 0.02% (2)
to cassandra back - 0.02% (2)
cassandra back in - 0.02% (2)
the fundamental principles - 0.02% (2)
format a30 trunc - 0.02% (2)
our rac cluster - 0.02% (2)
connection to the - 0.02% (2)
we have a - 0.02% (2)
i need to - 0.02% (2)
and pipe it’s - 0.02% (2)
we execute get_asm - 0.02% (2)
– we execute - 0.02% (2)
| egrep "inst_id|^--|${asmdisk}" - 0.02% (2)
will most likely - 0.02% (2)
note however that - 0.02% (2)
dependent on the - 0.02% (2)
alter session enable - 0.02% (2)
oradebug tracefile_name eof - 0.02% (2)
dump systemstate 266 - 0.02% (2)
and not the - 0.02% (2)
systemstate 266 oradebug - 0.02% (2)
in order to - 0.02% (2)
on the replication - 0.02% (2)
above code in - 0.02% (2)
set serveroutput on - 0.02% (2)
to accomplish this - 0.02% (2)
alter session set - 0.02% (2)
where rownum = - 0.02% (2)
makes it very - 0.02% (2)
outside of the - 0.02% (2)
4x400gb luns to - 0.02% (2)
we were adding - 0.02% (2)
the rate of - 0.02% (2)
consumption was at - 0.02% (2)
and our space - 0.02% (2)
large internet site - 0.02% (2)
operated a very - 0.02% (2)
a company that - 0.02% (2)
is turned on - 0.02% (2)
it – we - 0.02% (2)
at the same - 0.02% (2)
script in the - 0.02% (2)
generated by the - 0.02% (2)
the following sql - 0.02% (2)
we can open - 0.02% (2)
piping it’s output - 0.02% (2)
on size unlimited - 0.02% (2)
on set serveroutput - 0.02% (2)
on set timing - 0.02% (2)
on set time - 0.02% (2)
a special script - 0.02% (2)
this is the - 0.02% (2)
and then execute - 0.02% (2)
a trace file - 0.02% (2)
this session is - 0.02% (2)
waits and calls - 0.02% (2)
out all the - 0.02% (2)
execute on dbms_system - 0.02% (2)
next step is - 0.02% (2)
the database node - 0.02% (2)
of enabling sql - 0.02% (2)
use tkprof to - 0.02% (2)
is doing to - 0.02% (2)
trace file on - 0.02% (2)
calls this session - 0.02% (2)
the waits and - 0.02% (2)
sql trace for - 0.02% (2)
in the previous - 0.02% (2)
next step of - 0.02% (2)
example, lets say - 0.02% (2)
job was submitted - 0.02% (2)
the machine the - 0.02% (2)
it as follows: - 0.02% (2)
doing to a - 0.02% (2)
the udump directory - 0.02% (2)
| grep sqlplus - 0.02% (2)
in the sql - 0.02% (2)
an oracle database - 0.02% (2)
to connect to - 0.02% (2)
grep sqlplus from - 0.02% (2)
be able to - 0.02% (2)
way to start - 0.02% (2)
right before the - 0.02% (2)
show you how - 0.02% (2)
bulk copy begins - 0.02% (2)
there is to - 0.02% (2)
get an explain - 0.02% (2)
| head to - 0.02% (2)
now that we - 0.02% (2)
the number of - 0.02% (2)
is to it! - 0.02% (2)
that’s all there - 0.02% (2)
what we are - 0.02% (2)
to make sure - 0.02% (2)
it would be - 0.02% (2)
-lta | head - 0.02% (2)
the top of - 0.02% (2)
from their terminal. - 0.02% (2)

Here you can find a chart of all your popular one-, two-, and three-word phrases. Google and other search engines take the words you use most frequently as a signal of what your page is about.
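For illustration, here is a minimal Python sketch of how such a phrase chart can be computed from a page's extracted text. The tokenizer, the page.txt input file, and the phrase_frequencies helper are assumptions made for this example, not hupso.pl's actual implementation; the percentages appear to be each phrase's occurrence count as a share of the page's total word count, and the sketch prints them in the same "phrase - share% (count)" format the chart uses.

import re
from collections import Counter

def phrase_frequencies(text, max_words=3, top=20):
    # Tokenize into lowercase word-like runs; the character class also
    # keeps '/', ':' and '$' so phrases like "/ as sysdba" survive.
    words = re.findall(r"[a-z0-9'@$/:._#-]+", text.lower())
    total = len(words) or 1
    counts = Counter(
        " ".join(words[i:i + n])
        for n in range(1, max_words + 1)
        for i in range(len(words) - n + 1)
    )
    # Print in the chart's own "phrase - share% (count)" format.
    for phrase, count in counts.most_common(top):
        print(f"{phrase} - {100 * count / total:.2f}% ({count})")

if __name__ == "__main__":
    # Hypothetical input: the page's extracted text saved to a file.
    with open("page.txt", encoding="utf-8") as f:
        phrase_frequencies(f.read())

Run against the saved page text, this would reproduce the top of the chart above, up to differences in the exact tokenization rules the tool applies.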

Copyright © 2015-2016 hupso.pl. All rights reserved.

Hupso.pl is a web service where, with a single click, you can quickly and easily check a website for SEO. We offer free website positioning as well as valuation of domains and websites. We maintain a ranking of Polish websites and an Alexa site ranking.