Oracle SQL Tuning Pocket Reference, by Mark Gurry. Publisher: O'Reilly; pub date: January.



Oracle SQL Tuning Pocket Reference

One of the most important challenges faced by Oracle database administrators and Oracle developers is the need to tune SQL statements so that they execute efficiently.


The same index was the ideal candidate for another statement, one run frequently during end-of-month and end-of-year processing. The only difference between this statement and the last is that it selects a range of accounting periods for the financial year rather than just one accounting period.
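The shape of the two statements can be sketched as follows (the table and column names here are invented for illustration; the book's actual SQL is not reproduced in this excerpt):

```sql
-- Hypothetical sketch of the two predicates.
-- Single accounting period:
SELECT SUM(amount)
  FROM gl_balance
 WHERE company_code = 'ACME'
   AND acct_period  = '200306';

-- Range of accounting periods for the financial year:
SELECT SUM(amount)
  FROM gl_balance
 WHERE company_code = 'ACME'
   AND acct_period BETWEEN '200301' AND '200312';
```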

Once again, we timed the statement using the index selected by the cost-based optimizer against the index that contained all of the columns, and found the larger index to be at least three times faster; we reordered the index columns accordingly. This is fine, but often sites are using third-party packages that can't be modified, and consequently hints can't be utilized. However, there may be the potential to create a view that contains a hint, with users then accessing the view.

A view is useful if the SQL that is performing badly is from a report or online inquiry that is able to read from views.
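Such a view might be sketched as follows (the view, table, index, and column names are invented for illustration):

```sql
-- Hypothetical sketch: embedding an index hint in a view so that packaged
-- code reading the view picks up the hint without any code change.
CREATE OR REPLACE VIEW acct_balance_v AS
SELECT /*+ INDEX(b acct_balance_big_idx) */
       b.company_code, b.acct_period, b.amount
  FROM acct_balance b;
```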


As a last resort, I have discovered that sometimes, to force the use of an index, you can delete the statistics on the index.

Often, the execution plan will change to just the way you want it, but this type of practice is approaching black magic.
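Assuming the statistics-deletion approach is taken, it might be done as follows (the owner and index names are invented):

```sql
-- Hypothetical sketch: removing optimizer statistics from a single index.
ANALYZE INDEX app.acct_balance_big_idx DELETE STATISTICS;

-- Or, using the DBMS_STATS package:
EXEC DBMS_STATS.DELETE_INDEX_STATS(ownname => 'APP', indname => 'ACCT_BALANCE_BIG_IDX');
```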

It is critical that if you adopt such a black magic approach, you clearly document what you have done to improve performance. In summary, why does the cost-based optimizer make such poor decisions? First of all, I must point out that poor decision-making is the exception rather than the rule. The examples in this section indicate that columns are looked at individually rather than as a group. If they were looked at as a group, the cost-based optimizer would have realized in the first example that each row looked at was unique without the DBA having to rebuild the index as unique.

The second example illustrates that if several of the columns in an index have a low number of distinct values, and the SQL is requesting most of those values, the cost-based optimizer will often bypass the index. This happens despite the fact that collectively, the columns are very specific and will return very few rows. In fairness to the optimizer, queries using indexes with fewer columns will often perform substantially faster than those using an index with many columns.

Joining Too Many Tables

Early versions of the cost-based optimizer often adopted a divide-and-conquer approach when more than five tables were joined. Consider the example shown in the figure, a join of seven tables. The company has several branches, and the request is just for the branches in Washington State (WA). The query expects to return just a handful of rows from the various tables, and the response time should be no longer than one second.

However, because so many tables are being joined, the cost-based optimizer will often process F and G independently of the other tables and then merge the data at the end. The result of joining F and G first is that all addresses in the state of Washington must be selected. That process could take several minutes, causing the overall runtime to be far beyond what it would have been if Oracle had driven all table accesses from the A table.

Driving all accesses from the A table will speed the performance significantly. Interestingly, the rule-based optimizer often makes a bigger mess of the execution plan when many tables are joined than does the cost-based optimizer.
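One way to force the driving table, sketched with invented names and fewer tables than the seven-table example:

```sql
-- Hypothetical sketch: the ORDERED hint joins the tables in the order they
-- appear in the FROM clause, so Oracle drives from the selective ACCT table.
SELECT /*+ ORDERED */ a.acct_name, f.street, g.suburb
  FROM acct a, acct_address f, address g
 WHERE a.state   = 'WA'
   AND f.acct_no = a.acct_no
   AND g.addr_no = f.addr_no;
```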

The rule-based optimizer often will not use the ACCT table as the driving table. If you are using a third-party package, your best option may be to create a view with a hint, if that is allowable and possible with the package you are using.

INIT.ORA Parameter Settings

Many sites utilize a pre-production database to test SQL performance prior to moving index and code changes through to production. Ideally the pre-production database will have production volumes of data, and will have the tables analyzed in exactly the same way as the production database.

The pre-production database will often be a copy of the actual production datafiles. When DBAs test changes in pre-production, they may work fine, but have problems with a different execution plan being used in production. How can this be? The reason for a different execution plan in production is often that there are different parameter settings in the production INIT.ORA file.

I was at one site that ran the following update command and got a four-minute response, despite the fact that the statement's WHERE clause condition referenced the table's primary key. The statement performed well when the statistics were removed and the rule-based optimizer was used. After much investigation, we decided to check the INIT.ORA parameters. Other parameters include the following: Hash joins often will not work unless this parameter is set to at least 1 megabyte. It has a default of TRUE, and usually doesn't need to be set. For example, setting the parameter to 8. Some of the major improvements that have occurred with the various Oracle versions include: This parameter defaults to 0, with a range of 0 to Some sites have reported performance improvements when this parameter is set to It has a default of Sites report performance improvements when the parameter is set to between 10 and 50 for OLTP and 50 for decision support systems.

Adjusting it downwards may speed up some OLTP enquiries, but make overnight jobs run forever. If you increase its value, the reverse may occur. This is different from the Cartesian join that usually occurs for star queries. Set it to TRUE. Partition views were the predecessor to Oracle partitions, and are used very successfully by many sites for archiving and to speed performance.

However, reducing the permutations can cause an inefficient execution plan, so this parameter should not be modified from its default setting. This is achieved by translating similar statements that contain literals in the WHERE clause into statements that have bind variables. We suggest that you consider setting this parameter to SIMILAR with Oracle9i only if you are certain that there are lots of similar statements with the only differences between them being the values in the literals.
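The literal-versus-bind-variable distinction can be illustrated as follows (table and column names are invented):

```sql
-- With literals, each distinct value is parsed as a brand-new statement:
SELECT balance FROM accounts WHERE account_id = 1001;
SELECT balance FROM accounts WHERE account_id = 1002;

-- With a bind variable, one shared cursor serves every value:
SELECT balance FROM accounts WHERE account_id = :acct_id;
```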

It is far better to write your application to use bind variables if you can. FORCE should not be used. If you specifically set this parameter, the subquery becomes the driving query. Setting the parameter causes a merge or hash join rather than the ugly and time-consuming Cartesian join that will occur with standard NOT IN execution.

Remember that if any of these parameters are different in your pre-production database than in your production database, it is possible that the execution plans for your SQL statements will be different.

Make the parameters identical to ensure consistent behavior.
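One way to compare the settings is to list the relevant parameters on each database (this query is a sketch; the exact name patterns worth checking will vary by site and version):

```sql
-- Run on both pre-production and production, then compare the output.
SELECT name, value
  FROM v$parameter
 WHERE name LIKE 'optimizer%'
    OR name LIKE 'hash%'
    OR name LIKE 'cursor_sharing'
 ORDER BY name;
```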


It is important that you are aware of these problems and avoid them wherever possible. The table lists the problems and their occurrence rates. Most such problems are caused by having a function on an indexed column. Oracle8i and later allow function-based indexes, which may provide an alternative method of using an effective index.

In the examples in this section, for each clause that cannot use an index, I have suggested an alternative approach that will allow you to get better performance out of your SQL statements. Remember that indexes can tell you what is in a table, but not what is not in a table. All references to NOT, !=, and <> can prevent an index from being used.
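The rewrite can be sketched as follows (the table, column, and values are invented):

```sql
-- Avoid: a negative predicate that suppresses index use.
SELECT * FROM accounts WHERE status != 'CLOSED';

-- Prefer: a positive form, when the full set of values is known.
SELECT * FROM accounts WHERE status IN ('OPEN', 'SUSPENDED');
```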

Like other functions, it disables indexes. For function-based indexes to work, you must have the appropriate INIT.ORA parameters set, and you must be using the cost-based optimizer. The statement in the following example uses a function-based index. Oracle automatically performs a simple column type conversion, or casting, when it compares two columns of different types; the statement is actually processed with a conversion function wrapped around the column, which disables any index on it. Programs that are not performing up to expectation may have a casting problem.
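Both points can be sketched as follows (object names are invented; the book's own examples are not reproduced in this excerpt):

```sql
-- A function-based index lets a function on the column still use an index:
CREATE INDEX emp_upper_name_idx ON emp (UPPER(surname));

SELECT * FROM emp WHERE UPPER(surname) = 'GURRY';

-- Implicit casting: if emp_no is a VARCHAR2 column, then
--   WHERE emp_no = 12345
-- is processed as
--   WHERE TO_NUMBER(emp_no) = 12345
-- and a plain index on emp_no cannot be used.
```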

Indexes Are Missing or Inappropriate

While it is important to use indexes to reduce response time, the use of indexes can often actually lengthen response times considerably. I am astounded at how many tuners, albeit inexperienced, believe that if a SQL statement uses an index, it must be tuned. You should always ask, "Is it the best available index?" This must be considered when adding or modifying indexes.

The table on the left side of the figure ("Physical reads caused by an index") shows the index entries with the corresponding physical addresses on disk. The lines with the arrows depict physical reads from disk. Notice that each row accessed has a separate read.

A full table scan is typically able to read many rows of table information per block, so a single physical read from disk may return a large number of rows. In comparison, an index will potentially perform one physical read for each row returned from the table. The exception to this rule is if the entire query can be satisfied by the index without the need to go to the table.

In this case, an index lookup can be extremely effective. At one site, the response time was critical for answering online customer inquiries; we added an index that entirely satisfied the query, so the table did not need to be accessed. There is the tradeoff that this index now has to be maintained, but the benefits at this site far outweighed the costs. Another common problem I notice is that when tables are joined, the leading column of the index is not the column(s) that the tables are joined on.
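Both ideas can be sketched as follows (invented names):

```sql
-- A covering index: every column the query needs is in the index,
-- so the table itself is never visited.
CREATE INDEX cust_cover_idx ON customer (cust_no, status, balance);

SELECT status, balance
  FROM customer
 WHERE cust_no = 12345;    -- satisfied entirely from cust_cover_idx

-- When orders is joined to customer on cust_no,
-- the join column should lead the index:
CREATE INDEX ord_cust_idx ON orders (cust_no, order_date);
```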

What you really want are indexes whose leading columns are the columns the tables are joined on. Yet another common problem that I see is small tables that don't have any index at all. I quite often hear heated debates, with one person saying that the index is not required because the table is small and the data will be stored in memory anyway.

They will often explain that the table can even be created with the cache attribute. My experience has been that every small table should be indexed. The two reasons for the index are that the uniqueness of the rows in the table can be enforced by a primary or unique key, and, more importantly, the optimizer has the opportunity to work out the optimal execution plan for queries against the table.

The example in the following table shows that the response time of a particular query improved dramatically once the index was added. The most important thing about not having the index is that the optimizer will often create a less than optimal execution plan without it. Single-column index merges are bad news in all relational databases, not just Oracle. They cause each index entry to be read for the designated value on both indexes. Consider the following example, which is based on a schema used by a well-known stock brokerage. (By the way, there is a much lower number of N's.)
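The problem, and the usual fix of a single composite index, can be sketched as follows (invented names; the brokerage schema itself is not reproduced here):

```sql
-- Two single-column indexes on low-cardinality flag columns force every
-- matching entry in BOTH indexes to be read before the intersection is taken:
CREATE INDEX trade_open_idx   ON trade (open_flag);    -- 'Y'/'N'
CREATE INDEX trade_margin_idx ON trade (margin_flag);  -- 'Y'/'N'

-- A single composite index avoids the merge entirely:
CREATE INDEX trade_flags_idx ON trade (open_flag, margin_flag);

SELECT * FROM trade WHERE open_flag = 'N' AND margin_flag = 'N';
```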

The good news is that Oracle has an easy way around this problem. With the default INIT.ORA parameters intact, there is a definite bias towards using nested loops for table joins. Nested loops are great for online transaction processing systems, but can be disastrous for reporting and batch processing systems. The rule-based optimizer will always use a nested loop unless prompted to use other methods by hints, or by other means such as dropping all indexes off the tables. Online screens should definitely use nested loops, because data will be returned immediately.

Typically a screen will buffer 20 rows and stop retrieving until the user requests the next set of data. If effective indexes are in place, a typical response time for getting a set of data will be a second or so. To perform a hash join, a hash table is built in memory from the smaller table, and then the other table is scanned; the rows from the second table are compared to the hash table. A hash join will usually run faster than a merge join (a sort, then a merge) if memory is adequate to hold the entire table that is being hashed.
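Requesting a hash join explicitly might look like this (invented names):

```sql
-- The USE_HASH hint asks for a hash join, with emp probed against
-- a hash table built from the smaller dept table.
SELECT /*+ USE_HASH(e) */ d.dept_name, e.emp_name
  FROM dept d, emp e
 WHERE d.dept_no = e.dept_no;
```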

The entire result set must be determined before a single row is returned to the user. Therefore, hash joins are usually used for reporting and batch processing. Many DBAs and developers blindly believe that a hash join is faster than a merge join.

This is not always the case; recall the INIT.ORA parameters listed earlier and their impact on the optimizer's decision making. The fact is that each can be faster than the other under certain circumstances. This section lists examples from real-life sites that may assist you in determining which construct is best for a given situation.

However, there are always exceptions to the rule. The reason joins often run better than subqueries is that subqueries can result in full table scans, while joins are more likely to use indexes. A query rewritten as a join, returning the same result, will often execute much faster. As for which of EXISTS and IN is faster, the answer is that either can be faster depending on the circumstance.

If EXISTS is used, the execution path is driven by the tables in the outer select; if IN is used, the subquery is evaluated first, and then joined to each row returned by the outer query.
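The two forms can be sketched as follows (invented names):

```sql
-- EXISTS: driven by the outer table; attractive when the outer
-- result set is small.
SELECT c.cust_no, c.name
  FROM customer c
 WHERE EXISTS (SELECT 1 FROM orders o WHERE o.cust_no = c.cust_no);

-- IN: the subquery is evaluated first; attractive when the subquery
-- returns few rows.
SELECT c.cust_no, c.name
  FROM customer c
 WHERE c.cust_no IN (SELECT o.cust_no FROM orders o);
```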

Notice that the table in the subquery is accessed first, and that drives the query. The exception is when a very small number of rows exist in the table in the subquery, and the table in the main query has a large number of rows that are required to be read to satisfy the query. The following example uses a temporary table that typically has only a few thousand rows; the table is used in the subquery, while the outer table has many millions of rows.

In this example, the subquery is being joined to the main table using all of the primary key columns in the main table. Next is the IN-based version of the same query; notice the greatly reduced elapsed execution time. Hints can allow Oracle to return the rows in the subquery only once, and the same effect can be obtained by setting the appropriate INIT.ORA parameter.

Unnecessary Sorts

Despite a multitude of improvements in the way that Oracle handles sorts, including bypassing the buffer cache, having tablespaces especially set up as type temporary, and using memory more effectively, operations that include sorts can be expensive and should be avoided where practical. Several common operations require a sort, and there are also things that you can do in your SQL to avoid sorts, discussed in the following sections.

The UNION clause forces all rows returned by the different queries in the UNION to be sorted and merged in order to filter out duplicates before the first row can be returned to the calling module.
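Where duplicates are impossible or acceptable, UNION ALL avoids that sort (invented names):

```sql
-- UNION ALL returns rows as they are found, with no sort/merge step.
SELECT cust_no FROM current_customers
UNION ALL
SELECT cust_no FROM archived_customers;
```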

Indexes are stored in ascending order by default. If the columns in your ORDER BY clause are in the same sequence as the columns in an index, forcing the statement to use that index will cause the data to be returned in the desired order.

One advantage of eliminating the sort in an online application is that the first screenful of rows can be returned quickly. If the sort cannot be avoided, the entire result set must be sorted before the first row is returned; this could take a considerable amount of time, and is not desirable behavior in an OLTP environment.

Too Many Indexes on a Table

I've visited sites that have a standard in place saying that no table can have more than six indexes.

This will often cause almost all SQL statements to run beautifully, but a handful of statements to run badly, and indexes can't be added because there are already six on the table. In such cases, DBAs often suggest dropping redundant indexes, i.e., those whose columns are covered by other indexes. Dropping redundant indexes, however, may cause problems with the selection of a new driving table on a join using the rule-based optimizer (see the "What the RBO rules don't tell you" sections earlier in this book).

There is far less risk associated with dropping redundant indexes when the cost-based optimizer is being utilized. Having lots of indexes on a table will usually have only a small impact on OLTP systems, because only a few rows are processed in a single transaction, and the impact of updating many indexes is only milliseconds.

Having lots of indexes can be extremely harmful for batch update processing, with its typically high number of inserts, updates, and deletes. The table "Impact of multiple indexes on insert performance" demonstrates this, comparing the runtime of the same set of inserts as the number of indexes on the table grows. Oracle has added lots of functionality to help speed index rebuilds. Despite these enhancements, tables may get to a size at which the index rebuild process takes longer than running a batch update with the indexes intact.

My recommendation is to avoid rules stating that a site will not have any more than a certain number of indexes. Oracle9i adds some great new functionality that allows you to identify indexes that are not being used. Take advantage of it to identify and remove unused indexes.
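The Oracle9i feature referred to is presumably index usage monitoring; a sketch with an invented index name:

```sql
ALTER INDEX acct_idx MONITORING USAGE;
-- ...run a representative workload, then check:
SELECT index_name, used FROM v$object_usage;
ALTER INDEX acct_idx NOMONITORING USAGE;
```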

The bottom line is that all SQL statements must run acceptably. If it requires having 10 indexes on a table, then you should put 10 indexes on the table. We have found statements similar to the following at several sites. Wouldn't it be nice to make such a statement go faster?

Luckily, the fix is simple if you have access to change the code (i.e., if the application is not a package that cannot be modified).

Tables and Indexes with Many Deletes

Oracle is similar to many other databases in that there are performance issues with deletes. Oracle has a high-water mark, which represents the highest number of rows ever inserted into the table.

This high-water mark can have an impact on performance. Consider counting the rows of a large table: the cost-based optimizer performs a full table scan, which takes several seconds. Now let's delete all of the rows, so that the result is an empty table. The count takes just as long as before, because when performing a full table scan, Oracle reads as far as the table's high-water mark, and the high-water mark has not changed. Counting the rows again using the index is also slow, because the index entries are logically deleted but still exist physically.

To avoid the types of performance problems I've just demonstrated, my recommendation is to rebuild a table, and its indexes, whenever the table has undergone many deletes. If index columns are frequently updated, you should also re-build the indexes, because an update forces a logical delete in the index followed by an insert of the new, updated entry.
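A rebuild might be sketched as follows (invented names; exact options vary by Oracle version):

```sql
-- Rebuild the table segment, resetting the high-water mark:
ALTER TABLE trans MOVE;

-- MOVE leaves the indexes unusable, so rebuild them:
ALTER INDEX trans_pk REBUILD;

-- If every row can be discarded, TRUNCATE resets the high-water mark directly:
TRUNCATE TABLE trans;
```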

Some sites go as far as rebuilding indexes nightly when they have a lot of logical delete activity. You should consider regular rebuilds of indexes on such tables.

Heavy Usage of Views

Another common problem I see is heavy usage of views of views, which can totally confuse both optimizers, as well as the person trying to work out how to tune the resulting monstrosity.

Keep in mind that using hints in views of views will often not give consistent and good performance. Using hints on the outer view is preferable to using hints on the inner view.

Joining Too Many Tables

Joining more than five tables will almost always confuse both optimizers and produce a poor execution plan.

See Section 1. If you are lucky enough to be able to change the SQL to use hints, you can overcome the problem. Joining more than five tables frequently in an application usually points to not enough performance consideration at the design stage, when the logical data model was translated into a physical data model.

These times are typical of a medium- to high-end machine. The results of this statement appear as follows:

    Horse        Firsts  Seconds  Thirds
    Wild Charm   1       2        2

The alternative statement without the DECODE involves scanning the table three times, rather than once as in the previous statement.
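The single-scan DECODE technique the text describes can be sketched as follows (table and column names are invented):

```sql
-- One pass over the table produces all three counts:
SELECT horse,
       COUNT(DECODE(position, 1, 1)) AS firsts,
       COUNT(DECODE(position, 2, 1)) AS seconds,
       COUNT(DECODE(position, 3, 1)) AS thirds
  FROM results
 GROUP BY horse;
```

Without the DECODE, the same report needs three separate scans (or a UNION of three queries), one per placing.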

Database Searching

This guide will allow you to follow an evidence-based or research topic through the various stages of searching: search terms joined by Boolean operators, nesting, and phrase searching. Different methods exist for accessing each database, and every system has its own searching style and entry mechanisms.

Data stored in libraries, hospitals, warehouses, institutes, and other databases has to be searched and retrieved. Database searching is different from Google searching.

Library Database Searching Techniques. Which database? Depending on your information need, you may use more than one database for your research. A skillful searcher can also analyze the reasons for failed searches and make intelligent adjustments.

Because the database knows how values are sorted, it can apply more sophisticated search algorithms than just looking for the value from start to finish.

Why use Boolean operators? To focus a search, particularly when your topic contains multiple search terms. This also may be referred to as proximity searching.

Full-text search engines evolved much later than traditional database engines, as corporations and governments found themselves with more and more unstructured textual data in electronic format.

Tips and Techniques for Effective Searching

Blended librarian Stephanie Jacobs has created a new video about the basics of article databases and searching them.

If you'd like additional searching help, you can contact one of our librarians.

Introduction to Advanced Searching Techniques

To conduct a precise and thorough search, you may need to use a variety of techniques and a combination of operators and search terms, from Boolean logic to proximity searching.

What is Web of Science? Web of Science is a unified research platform that can help you quickly find, analyze and share information in the sciences, social sciences, art and humanities that connects you to a wide variety of content. Search algorithms can be classified based on their mechanism of searching. Begin your search by reading the various database descriptions to find out which database deals with your subject.

We sort the items on a list into alphabetical or numerical order. You can apply these techniques to just about any database, from PubMed to Google. Phrase Searching means searching for two or more words as an exact phrase. It is important to note that there is no best tool for searching the web. Tip 5: Find quick answers. Search statements are used when searching catalogs, databases, and search engines to find books, articles, and websites.

Creating an effective search can take some practice, but here are some basic searching tips that may help you find what you need. Therefore, it is important to use the search syntax or searching rules that will provide you with the best results.

It's important to see if there are any controlled vocabulary terms (e.g., subject headings); databases usually include a thesaurus or list of subject terms. The Access form (see Figure 1) that drives the database search that I am outlining can be imported into your database. Problems involved in searching multiple databases are also discussed. EBSCOhost will assume adjacency searching, meaning words are searched in that exact order.

Techniques for evaluating the content of, and interface for, a database are also covered. The binary search algorithm is a simple example of a searching algorithm for sorted lists; it reduces the maximum search time from O(n) to O(log n).

This may be useful in helping new students or those who may need a refresher on the basics. The search tool then searches every table or linked table in that database. How to search the Pace University Library databases.

A key to getting good search results is to use common search techniques that you can apply to almost any database, including article databases, online library catalogs, and even commercial search engines. Commands for adjacency searching differ among databases, so make sure you consult database guides. Searching for a database object name or object definition is a bit easier than searching for specific text.

Database Searches

Below are some tips for searching databases. These new text documents didn't fit well into the old table-style databases, so the need for unstructured full-text searching was apparent.

Boolean Searching

Most databases allow the user different searching methods. Search techniques are similar for all electronic resources, whether you are using an internet search engine or a library database. This short video explains some basic searching techniques to use when searching library databases.

The Modern Language Association (MLA) International Bibliography database is an index to journal articles, books, and more, published on languages, literatures, folklore, film, and linguistics.


Introduction to Keyword Searching

Keyword searching is the process of choosing search terms and entering them into the database search boxes to locate information on your topic. Some database structures are specially constructed to make search algorithms faster or more efficient, such as a search tree, hash map, or a database index.

This course examines three major categories of issues related to information access and retrieval. There are two different types of research techniques: scientific and historical.

As we see in the example above, we are using nesting alongside the phrase searching technique. Finding a Text String. For many searches, Google will do the work for you and show an answer to your question in the search results. To switch from phrase to keyword searching, try putting the individual words in separate boxes if available or separate them with the word "and".


See Boolean searches. Use the help function for each specific database or search engine for more information on proximity searching. Where databases are more complex they are often developed using formal design and modeling techniques. But most people may not be using Google search to its full potential. This database allows researchers to search by Author or Keyword and limit results by year. This includes formulating a search strategy, running the search on a number of databases such as Web of Science, Scopus, PubMed and many more.

Students use it for school, business people use it for research, and millions more use it for entertainment. Microsoft Excel can be used to create searchable databases because the structure of a spreadsheet makes it easy to create databases.

Several databases use check tags, which are frequently searched terms that have been identified to bypass the mapping screen to help you see your results faster. This searches across many databases at once. The purpose of both techniques are to use a logical approach to obtain information about a specific subject. Advanced search techniques Searching for information at postgraduate level has to be precise and thorough. For finding accurate, useful information quickly, the web is generally no match for database and catalog searching.

As with any search strategy, you may want to consider what subject searching is most useful for, and anything you should be cautious about when using this searching technique. The search box is the most important element on web pages, especially on content management sites. Click on the "Boolean Searching" page for more information. In this editorial, we present only general searching techniques.

If unsuccessful, try removing one or more searched elements in case a number has been transposed or a name misspelled. UCLA Library Research Guides: we have research guides for every subject on campus, and every one has a page linking to the major databases in that field, often with advice or tips. In a bitmap join index, the bitmap for the table to be indexed is built for values coming from the joined tables.

Heuristic searching tools are designed to aid the user in learning, discovering, or problem solving through self-educating techniques. Within a database or online catalog, subject searching allows you to search by categories, which are found in the subject field of an item record.

There are several methods that can be used.