
Setting the degree to a lowest common denominator value (for example, 2) provides effective parallelism at higher user counts but does not fully leverage resources when fewer users are active.

Self-tuning adaptive parallelism

Oracle8i introduced the notion of self-tuning adaptive parallelism. This feature automatically scales down parallelism as the system load increases and scales it back up as the load decreases. When an operation requests a degree of parallelism, Oracle will check the system load and lower the actual degree the operation uses to avoid overloading the system. As more users request parallel operations, the degree they receive will become lower and lower until operations are executing serially. If activity decreases, subsequent operations will be granted increasing degrees of parallelism. This adaptability frees the DBA from the difficult task of trying to determine the optimal degree of parallelism in the face of constant changes in workload.

Adaptive parallelism takes two factors into account in determining the degree of parallelism granted to an operation:

• System load.

• Parallelism resource limitations of the user’s consumer group if the Database Resource Manager is active. (The Database Resource Manager is explained in Chapter 9 and also later in this chapter.) This is important, because it means that adaptive parallelism respects resource plans if they’re in place.
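
As a minimal sketch of how these controls surface to the DBA, the statements below enable adaptive parallelism instance-wide and declare a default degree of parallelism for a table. The table name and degree are illustrative assumptions, not examples from this book:

    -- Enable self-tuning adaptive parallelism for the instance
    ALTER SYSTEM SET parallel_adaptive_multi_user = TRUE;

    -- Request a default degree of 8 for operations on this (hypothetical) table;
    -- under load, Oracle may grant a lower actual degree, down to serial execution
    ALTER TABLE sales PARALLEL 8;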

Partition-Based Parallelism

A small subset of Oracle’s parallel functionality is based on the number of partitions or subpartitions accessed by the statement to be parallelized. For block-range parallelism, the piece of data each PE process works on is a range of blocks. For partition-based parallelism, the pieces of data that drive parallelism are partitions or subpartitions of a table. The operations in which parallelism is based on the number of partitions or subpartitions include the following:

• Updates and deletes

• Index scans

• Index creation and rebuilds on partitioned tables


Parallelism for partitions and subpartitions of a table

Oracle8 introduced support for parallel Data Manipulation Language (DML), or the ability to execute INSERT, UPDATE, and DELETE statements in parallel. This type of parallelism improves the performance of large bulk operations (for example, an update to all the rows of a very large table).

In Oracle8 the degree of parallelism for updates and deletes is tied to the number of partitions involved, while in Oracle8i and beyond it is tied to the number of partitions or subpartitions involved. A table with 12 partitions (for example, one partition for each month of the year) can have a maximum of 12 PEs for an update or delete. An update to only one month of data would have no parallelism because it involves only one partition. If the table were created using Oracle’s composite partitioning (for example, with 4 hash subpartitions by PRODUCT_ID within each month), the maximum degree of parallelism for the entire table would be 48, or 12 partitions with 4 subpartitions each. An update to one month of data could have a degree of 4 because each month contains 4 hash subpartitions. If the table is not partitioned, Oracle cannot perform updates or deletes in parallel.
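
To make the arithmetic concrete, here is a hedged sketch of a composite-partitioned table like the one described above and a parallel update against it. The table and column names (sales, sale_date, product_id) and the sizes are illustrative assumptions, and only two of the twelve monthly partitions are spelled out:

    -- Range partitions by month, each with 4 hash subpartitions: 12 x 4 = 48 pieces
    CREATE TABLE sales (
        sale_id    NUMBER,
        product_id NUMBER,
        sale_date  DATE,
        amount     NUMBER
    )
    PARTITION BY RANGE (sale_date)
    SUBPARTITION BY HASH (product_id) SUBPARTITIONS 4
    (
        PARTITION sales_jan VALUES LESS THAN (TO_DATE('2011-02-01','YYYY-MM-DD')),
        PARTITION sales_feb VALUES LESS THAN (TO_DATE('2011-03-01','YYYY-MM-DD'))
        -- ...one partition per month, 12 in all
    );

    -- Parallel DML must be enabled for the session before the update
    ALTER SESSION ENABLE PARALLEL DML;

    -- An update touching the whole table can use up to one PE per (sub)partition;
    -- an update restricted to one month can use at most 4 (its hash subpartitions)
    UPDATE /*+ PARALLEL(sales, 12) */ sales
    SET amount = amount * 1.05;
    COMMIT;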

Oracle8 and later versions can execute index creation, index rebuilds, and index scans for partitioned indexes in parallel using the same semantics as parallel DML: one PE process per partition or subpartition of the index.
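
As a brief, hedged illustration (the index, table, and partition names are assumptions carried over from the sketch above), a local partitioned index can be created or rebuilt in parallel like this:

    -- Create a local (partitioned) index in parallel, one PE per index partition
    CREATE INDEX sales_prod_idx ON sales (product_id) LOCAL PARALLEL;

    -- Rebuild a single index partition with a requested degree of 4
    ALTER INDEX sales_prod_idx REBUILD PARTITION sales_jan PARALLEL 4;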

Fast full index scans for nonpartitioned tables

People often assume that the Oracle database can parallelize index scans only if the target index is partitioned. In fact, Oracle 7.3 introduced the ability to perform parallel index scans on nonpartitioned indexes for one specific case: if the index scan operation is “unbounded,” meaning that the entire index will be accessed to satisfy the query, Oracle 7.3 and higher can use block-range parallelism to access the entire index in parallel. While Oracle can perform parallel index scans on nonpartitioned indexes, this feature applies to a narrow set of queries; partition-based index scans apply to a much broader range of queries.
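
A hedged sketch of the kind of query that can benefit follows; the table, index name, and hints are illustrative assumptions rather than examples from the book:

    -- Counting all rows can be satisfied by scanning the entire index on a
    -- NOT NULL column; because the whole index is read ("unbounded"), Oracle
    -- can apply block-range parallelism even though the index is not partitioned
    SELECT /*+ INDEX_FFS(o orders_pk) PARALLEL_INDEX(o, orders_pk, 4) */ COUNT(*)
    FROM   orders o;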

Parallel insert for nonpartitioned and partitioned tables

Oracle can execute an INSERT statement of the form INSERT INTO tableX SELECT...FROM tableY in parallel for nonpartitioned and partitioned tables.

Oracle uses a set of PE processes executing block-range parallelism for the SELECT portion of the INSERT statement. These PE processes pass the rows to a second set of PE processes, which insert the rows into the target table. The target table can be a nonpartitioned or partitioned table. Parallelism for an insert is not exactly block-range or partition-based.
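
A minimal sketch of such a parallel insert is shown below, keeping the tableX/tableY naming from the text; the degree of 4 and the APPEND hint are illustrative assumptions:

    -- Parallel DML must be enabled for the session, or the insert runs serially
    ALTER SESSION ENABLE PARALLEL DML;

    -- One set of PEs scans tableY with block-range parallelism; a second set
    -- inserts the rows it receives into tableX
    INSERT /*+ APPEND PARALLEL(tableX, 4) */ INTO tableX
    SELECT /*+ PARALLEL(tableY, 4) */ *
    FROM   tableY;

    COMMIT;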


Oracle and Memory Resources

Accessing information in memory is much faster than accessing information on disk. An Oracle instance uses the database server’s memory resources to cache frequently accessed information and improve performance. Oracle utilizes an area of shared memory called the System Global Area (SGA) and a private memory area for each server process called the Program Global Area (PGA).

Prior to Oracle9i, you could only specify the size of the SGA or any of its components (database buffer cache, shared pool, or large pool) in the initialization file, and the size of these memory allocations could not be changed without shutting down and restarting the instance. Oracle9i enabled dynamic resizing of these pools based on a minimum memory allocation called a granule. Oracle Database 10g and later releases can automatically manage shared memory. Oracle Database 11g adds automatic memory management of the SGA and PGA.
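
As a hedged sketch of the Oracle Database 11g approach described here, a single memory target can be set and the instance then manages the SGA and PGA within it. The sizes are purely illustrative:

    -- Oracle Database 11g automatic memory management: one target for SGA + PGA
    -- (MEMORY_MAX_TARGET is static and takes effect at the next instance restart)
    ALTER SYSTEM SET MEMORY_MAX_TARGET = 4G SCOPE=SPFILE;
    ALTER SYSTEM SET MEMORY_TARGET     = 3G SCOPE=SPFILE;

    -- Setting SGA_TARGET and PGA_AGGREGATE_TARGET to 0 lets the instance decide
    -- the split; nonzero values act as minimums for each area
    ALTER SYSTEM SET SGA_TARGET = 0 SCOPE=SPFILE;
    ALTER SYSTEM SET PGA_AGGREGATE_TARGET = 0 SCOPE=SPFILE;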

Exhausting a database server’s supply of memory will cause poor performance. If you are running an older release of Oracle, you should gauge the size of the various memory areas Oracle uses or add more memory to the machine to prevent a memory deficit from occurring. What constitutes the right size for the various areas is a function of your application behavior, the data it uses, and your performance requirements.

How Oracle Uses the System Global Area

Oracle uses the SGA for the following operations:

• Caching of database blocks containing table and index data in the database buffer cache

• Caching of parsed and optimized SQL statements, stored procedures, and data dictionary information in the shared pool

• Buffering of redo log entries in the redo log buffer before they’re written to disk

In versions prior to Oracle9i, the amount of memory allocated to each of these areas within the SGA was determined at instance startup using initialization parameters and could not be altered without restarting the instance. The majority of tuning efforts focused on the database buffer cache and the shared pool.

Automatic sizing for the SGA

Oracle Database 10g eliminated manual tuning of SGA pools with automatic sizing for the SGA. Using automatic shared memory management, the database automatically allocates memory for the following SGA pools: database buffer cache, shared pool, large pool, Java pool, and Streams pool. You have to specify only the total amount of memory required by setting the SGA_TARGET initialization parameter.


Since Oracle Database 10g, the database proactively monitors the memory requirements for each pool and dynamically reallocates memory when appropriate. While using automatic SGA sizing, you can also specify a minimum amount of memory for any of the SGA pools with the following initialization parameters: DB_CACHE_SIZE, SHARED_POOL_SIZE, LARGE_POOL_SIZE, JAVA_POOL_SIZE, and STREAMS_POOL_SIZE. A few SGA pools, specified using such parameters as LOG_BUFFER, DB_KEEP_CACHE_SIZE, and DB_RECYCLE_CACHE_SIZE, are still manually sized.
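
The following hedged sketch shows how these parameters combine: a total SGA target plus a floor for one pool. The sizes are illustrative only and assume an spfile is in use with SGA_MAX_SIZE of at least 2 GB:

    -- Automatic shared memory management: Oracle divides 2 GB among the
    -- automatically sized pools (buffer cache, shared pool, large pool, etc.)
    ALTER SYSTEM SET SGA_TARGET = 2G SCOPE=BOTH;

    -- A nonzero DB_CACHE_SIZE acts as a minimum for the buffer cache while
    -- SGA_TARGET is in effect; it does not disable automatic sizing
    ALTER SYSTEM SET DB_CACHE_SIZE = 512M SCOPE=BOTH;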

The database buffer cache

If you decide to disable SGA_TARGET by setting it to 0, you will need to manually set initialization parameters for the memory pools (unless you want to use previous sizes). For the database buffer cache, you would assess the percentage of database blocks requested by users that are read from the cache rather than from disk. This percentage is termed the hit ratio. If response times are too high and this ratio is lower than 90% (as a rule of thumb), increasing the value of the initialization parameter DB_CACHE_SIZE can improve performance.

You can use Oracle Enterprise Manager to get information about the cache hit ratio.
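
If you prefer to compute the figure yourself, a commonly used approximation derives the hit ratio from three V$SYSSTAT statistics; treat it as a rough indicator rather than a precise measure:

    -- Buffer cache hit ratio ~= 1 - physical reads / (db block gets + consistent gets)
    SELECT 1 - (phys.value / (blk.value + cons.value)) AS buffer_cache_hit_ratio
    FROM   v$sysstat phys, v$sysstat blk, v$sysstat cons
    WHERE  phys.name = 'physical reads'
    AND    blk.name  = 'db block gets'
    AND    cons.name = 'consistent gets';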

It’s tempting to assume that continually increasing the size of the database buffer cache will translate into better performance. However, this is true only if the database blocks in the cache are actually being reused. Most OLTP systems have a relatively small set of core tables that are heavily used (for example, lookup tables for things such as valid codes). The rest of the I/O tends to be random, accessing a row or two in various database blocks in the course of the transaction. Because of this, having a larger buffer cache may not contribute to performance since there isn’t much reuse of data blocks occurring.

In addition, not all operations read from the database buffer cache. For example, large full-table scans are limited to a small number of buffers to avoid adversely impacting other users by dominating the cache. If your application performs a lot of table scans, increasing the buffer cache may not help performance because the cache will not contain the needed data blocks. Parallel table scans completely bypass the buffer cache and pass rows directly to the requesting user process. As with most performance issues, your understanding of how your application is actually using your data is the key that will help guide your database buffer-cache tuning.

The shared pool

The shared pool is used at several points during the execution of every operation that occurs in an Oracle database. For example, the shared pool is accessed to cache the SQL sent to the database and the data dictionary information required to execute that SQL. Because of its central role in database operations, a shared pool that is too small may have a greater impact on performance than a database buffer cache that is too small. If the requested database block isn’t in the database buffer cache, Oracle will perform an I/O to retrieve it, resulting in a one-time performance hit.

A shared pool that is too small will cause poor performance for a variety of reasons, affecting all users. These reasons include the following:

• Not enough data dictionary information can be cached, resulting in frequent disk access to query and update the data dictionary.

• Not enough SQL can be cached, leading to memory “churn,” or the flushing of useful statements to make room for incoming statements. A well-designed application issues the same statements repeatedly. If there isn’t enough room to cache all the SQL the application uses, the same statements get parsed, cached, and flushed over and over, wasting valuable CPU resources and adding overhead to every transaction.

• Not enough stored procedures can be cached, leading to similar memory churn and performance issues for the program logic stored and executed in the database.

If you are manually managing the shared pool and you’ve diagnosed which of these problems is occurring, the solution is fairly simple: increase the size of the shared pool using the SHARED_POOL_SIZE initialization parameter. Shared pool sizes in the 150–250 MB range are not uncommon for large, active databases. For more information about examining shared pool activity to identify problems, see the appropriate Oracle Performance Tuning Guide, as well as the third-party books listed in Appendix B.
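
As a hedged starting point for that diagnosis, the standard V$LIBRARYCACHE view shows how often cached statements and procedures had to be reloaded after being flushed; steadily rising reloads suggest a shared pool that is too small. The 250 MB figure below is illustrative, echoing the range mentioned above:

    -- Reloads indicate objects that were aged out of the shared pool and had
    -- to be reparsed and reloaded; compare them to total pins per namespace
    SELECT namespace, pins, reloads,
           ROUND(reloads / NULLIF(pins, 0) * 100, 2) AS reload_pct
    FROM   v$librarycache
    ORDER  BY reloads DESC;

    -- If reloads are high and the pool is manually managed, a larger shared
    -- pool may help (illustrative size)
    ALTER SYSTEM SET SHARED_POOL_SIZE = 250M SCOPE=BOTH;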

The redo log buffer

While the redo log buffer consumes a very small amount of memory in the SGA relative to the database buffer cache and the shared pool, it’s critical for performance.

Transactions performing changes to the data in the database write their redo information to the redo log buffer in memory. The redo log buffer is flushed to the redo logs on disk when a transaction is committed (normally) or when the redo log buffer is one-third full. Oracle “fences” off the portion of the redo log buffer that’s being flushed to disk to make sure that its contents aren’t changed until the information is safely on disk. Transactions can continue to write redo information to the rest of the redo log buffer (the portion that isn’t being written to disk and therefore isn’t fenced off by Oracle). In a busy database, transactions may generate enough redo to fill the remaining unfenced portion of the redo log buffer before the I/O to the disks for the fenced area of the redo log buffer is complete. If this happens, the transactions will have to wait for the I/O to complete because there is no more space in the redo log buffer. This situation can impact performance. The statistic “redo buffer allocation retries” reflects the number of times a session had to wait for space in the redo log buffer.
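
A hedged way to check for this condition is to read that statistic from V$SYSSTAT; a steadily growing value suggests sessions are waiting on redo log buffer space. The LOG_BUFFER size shown is purely illustrative:

    -- Nonzero, growing values mean sessions waited for space in the redo log buffer
    SELECT name, value
    FROM   v$sysstat
    WHERE  name = 'redo buffer allocation retries';

    -- If retries keep climbing, a larger LOG_BUFFER (set at instance startup in
    -- the initialization parameter file) is one common response, for example:
    -- LOG_BUFFER = 16777216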