Above is an illustration of a database and an instance of an Oracle database server. (It’s possible to have more than one instance on a server, as each runs in a separate memory area.)
Instance = Memory (Shared Global Area) + Background processes. When an instance of a database is started (sql>startup), it runs on an allocated part of the system memory. The instance runs background processes and starts serving users.
With Oracle there are both logical and physical structures. The physical structure can be altered without affecting the logical structure, such as changing the name of the datafile or its path. In database theory this is known as Physical Data Independence.
A database contains both the control files and online redologs.
The Oracle DBMS attempts to carry out all transactions in the main memory. This is due to a big difference in the performance cost of carrying out the same transactions in disk storage. When Oracle starts, it splits its allocated memory into areas for different uses.
Some of the areas are used for execution plans, some for storing data and all are made available for users.
Area where changes can be made while the database is running:
- Database Buffer Cache
- Shared pool
- Large pool
- Java pool
- Streams pool
If the memory_target and memory_max_target parameters have been configured, Oracle will manage memory automatically. Setting minimum sizes for individual components at the same time prevents their memory falling below those levels.
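As a sketch (the parameter names are standard Oracle, but the sizes here are purely illustrative), automatic memory management can be enabled like this:

```sql
-- Check the current setting (memory_target is available from 11g onwards)
SHOW PARAMETER memory_target;

-- Illustrative sizes only: set the upper bound, then the working target.
-- memory_max_target is static, so a restart is needed for it to take effect.
ALTER SYSTEM SET memory_max_target = 2G SCOPE = SPFILE;
ALTER SYSTEM SET memory_target = 1500M SCOPE = SPFILE;
```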
System Global Area (SGA) / Shared Global Area
The SGA is reserved memory, shared among all processes. The data kept in the SGA is used both in read & write transactions and is accessible by DBMS users.
Each server instance has its own SGA.
Whenever a server instance starts, it gets allocated an amount of memory which later gets deallocated when the instance stops.
Program Global Area (PGA)
Unlike the SGA, the Program Global Area (PGA) is not shared memory. It holds data and control information for a single server or background process.
It gets allocated when a server or background process starts. Each server process and background process has its own PGA.
In terms of database performance, the cache hit percentage should be kept above 90 percent.
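The classic buffer cache hit ratio can be computed from v$sysstat; the statistic names below are the standard ones, though the exact query is only a sketch:

```sql
-- Hit ratio = 1 - physical reads / (db block gets + consistent gets)
SELECT ROUND(
         (1 - phy.value / (cur.value + con.value)) * 100, 2
       ) AS cache_hit_percent
FROM   v$sysstat cur, v$sysstat con, v$sysstat phy
WHERE  cur.name = 'db block gets'
AND    con.name = 'consistent gets'
AND    phy.name = 'physical reads';
```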
Operations such as ORDER BY, GROUP BY and ROLLUP are carried out in the PGA where possible. If the PGA is insufficient, they are executed in the temp tablespace. A server process is instantiated for each user and accordingly has its own PGA allocated.
Bind Variables
A bind variable can be considered a stored variable: an SQL statement contains a placeholder that is replaced with a value before execution. Bind variables are an alternative to literals, which are constants embedded in the statement text.
The main reason bind variables are used is to improve a database’s performance by removing the repeated parsing load from the CPU and memory. PL/SQL supports bind variables in the background, without the user having to factor them in when writing statements.
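A minimal SQL*Plus sketch of a bind variable (the table and values are illustrative):

```sql
-- Declare and set a bind variable in SQL*Plus
VARIABLE emp_id NUMBER
EXEC :emp_id := 100;

-- The placeholder :emp_id is bound at execution time, so repeated runs
-- with different values can reuse the same parsed cursor
SELECT first_name, last_name
FROM   employees
WHERE  employee_id = :emp_id;
```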
In Dedicated Mode, the SGA and PGA are separate.
With Shared Mode, the PGA is within the SGA memory. A Large Pool area can optionally be configured.
- Work areas that complete entirely in the PGA are referred to as Optimal Executions.
- Those that need one extra pass over the data on disk are known as One-Pass Executions.
- Those that need more than one extra pass are called Multipass Executions.
- It’s best to keep Multipass (red in the advisor) executions to a minimum.
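These execution counts can be inspected in v$sql_workarea_histogram; the column names below follow the standard view, but treat the query as a sketch:

```sql
-- How many work areas ran optimal / one-pass / multipass, by size range
SELECT low_optimal_size / 1024        AS low_kb,
       (high_optimal_size + 1) / 1024 AS high_kb,
       optimal_executions,
       onepass_executions,
       multipasses_executions
FROM   v$sql_workarea_histogram
ORDER  BY low_optimal_size;
```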
All dynamic performance views whose names start with “v$” are listed in v$fixed_table.
In the buffer cache advisor (v$db_cache_advice), the row where the name column is “DEFAULT” and size_factor is 1 corresponds to the current buffer cache size.
Similarly, the “KEEP” row with a size_factor of 1 indicates the current keep pool size.
For each candidate amount of RAM in SIZE_FOR_ESTIMATE, the ESTD_PHYSICAL_READS column shows the estimated number of physical reads at that size.
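Assuming the advisor view in question is v$db_cache_advice (whose columns match those names), a sketch:

```sql
-- Estimated physical reads for each candidate buffer cache size
SELECT name,
       size_for_estimate,   -- candidate cache size in MB
       size_factor,         -- 1 = current size
       estd_physical_reads
FROM   v$db_cache_advice
WHERE  name = 'DEFAULT'
ORDER  BY size_for_estimate;
```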
ESTD_LC_TIME_SAVED indicates the amount of time saved in the Library Cache.
As the amount of RAM allocated increases, so does the time saved; however, it isn’t possible to allocate more than the available amount of RAM.
ESTD_PGA_CACHE_HIT_PERCENTAGE should ideally be at 100% and a minimum of over 90%.
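The PGA advisor can be queried along the same lines; a sketch over v$pga_target_advice:

```sql
-- Estimated cache hit percentage for candidate pga_aggregate_target values
SELECT ROUND(pga_target_for_estimate / 1024 / 1024) AS target_mb,
       pga_target_factor,                -- 1 = current setting
       estd_pga_cache_hit_percentage
FROM   v$pga_target_advice
ORDER  BY pga_target_for_estimate;
```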
The database buffer cache is an area of memory shared with users. It contains a copy of the blocks of data read from the Datafile.
When a user enters a query using a SELECT statement, the Database Buffer Cache (DBC) first gets checked.
If the requested data is found there, known as a cache hit, the data is read from the DBC and passed to the user.
If the data has not been located in the DBC (a cache miss), the data is extracted from the datafile, copied to the DBC and given as a result to the user.
In order to copy the data from the datafile to the DBC, there needs to be free memory available. If there isn’t any available memory, memory is freed and then copied. To free the required memory, a Least Recently Used (LRU) cache algorithm is utilized and not FIFO (First In First Out) or other similar techniques.
A SQL statement with a restrictive WHERE clause doesn’t copy the entire employees table to the DBC, only the records that satisfy the condition.
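For example (the filter column and value here are illustrative):

```sql
-- Only the blocks holding rows that match the condition are read into
-- the cache, not the whole employees table
SELECT * FROM employees WHERE department_id = 50;
```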
The buffer cache size appears as the db_cache_size parameter.
Note: If buffer cache wait events last too long, the buffer cache should be increased.
The cache hit ratio, i.e. the share of reads served logically from the cache rather than physically from disk, should be 90% or above; physical reads should be as low as possible.
Tables that are read from or written to most frequently can be kept in the buffer cache.
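One way to do this is the KEEP buffer pool; a sketch (the table name is illustrative, and db_keep_cache_size must be given room first):

```sql
-- Reserve memory for the keep pool (the size here is illustrative)
ALTER SYSTEM SET db_keep_cache_size = 64M;

-- Ask Oracle to cache this frequently accessed table in the keep pool
ALTER TABLE hot_lookup_table STORAGE (BUFFER_POOL KEEP);
```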
The Redo Log Buffer contains all changes (such as Insert, Update, Delete, Create, Alter, Drop) made to a database. This data is necessary when recovery operations are required.
Logically, the redo logs have a circular layout. The Log Writer (LGWR) background process writes this data to the redo log files. (Version 9i has a default log buffer size of 512K; depending on the system, it is suggested to increase it to 5 to 10MB.) log_buffer = redo log buffer
The shared pool is an area within the SGA and contains:
- data dictionary cache
- library cache
Data Dictionary Cache
A Data Dictionary contains reference information regarding the database structure and users, such as tables and views.
While SQL statements are parsed (processed), the data dictionary can be accessed frequently which is why keeping it in the main memory is important for maintaining database performance.
Library Cache
Contains the shared SQL area, where SQL statement parsing information and execution plans are kept. When two users concurrently enter the same SQL statement, Oracle recognizes this, and instead of repeating the same parsing process the DBMS reuses the information from the library cache. This is known as soft parsing.
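Shared cursors and their parse/execution counts can be observed in v$sql; a sketch (the filter text is illustrative):

```sql
-- A statement run by two users appears once; executions exceeding
-- parse_calls is a sign that soft parsing is working well
SELECT sql_text, parse_calls, executions
FROM   v$sql
WHERE  sql_text LIKE 'SELECT first_name%';
```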
When a new SQL statement gets parsed, Oracle uses an amount of memory from the shared pool depending on the statement’s level of complexity.
The same applies to the PL/SQL code or parts of a program such as procedures, functions, packages, anonymous blocks and triggers.
It uses a Least Recently Used (LRU) algorithm. After statistics are gathered (DBMS_STATS), the related entries in the shared pool are flushed.
When an object referenced by a cached SQL statement is altered, the statement becomes invalid and a reparse is necessary.
If the global database name is changed, the database administrator can manually flush the shared pool.
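The standard command for flushing the shared pool by hand is:

```sql
-- Empties cached parse trees and execution plans; use sparingly,
-- as every statement afterwards must be hard parsed again
ALTER SYSTEM FLUSH SHARED_POOL;
```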
The buffer cache (BC) caches data, while the shared pool caches parsed queries and execution plans.
Parsing involves checking the query’s syntax and the access rights on the tables and columns, followed by processing.
Deciding on whether to do a full table scan or use an index.
If the same queries are entered again, they don’t need to be parsed again.
For more information, the v$sgastat view can be queried.
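For instance, free memory per pool can be checked like this (a sketch):

```sql
-- Free memory in each SGA pool (shared pool, large pool, java pool, ...)
SELECT pool, name, bytes
FROM   v$sgastat
WHERE  name = 'free memory';
```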
The large pool is an optional memory area for:
- Databases that operate in shared mode
- I/O Server processes in applications that use the Oracle XA connectivity interface
It can be activated for backup and restore processes that need extra memory.
If the large pool hasn’t been configured, RMAN uses the shared pool, which has a negative impact on performance.
On multicore systems parallel SQL could be used (see Oracle Hints for more help).
If shared server mode is active, more room should be reserved for the large pool.
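Sizing the large pool is a one-line parameter change (the size here is illustrative; under automatic memory management the value acts as a minimum):

```sql
-- Give the large pool explicit room so RMAN and shared servers
-- do not fall back on the shared pool
ALTER SYSTEM SET large_pool_size = 128M SCOPE = BOTH;
```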
The Java pool is a memory area for Java code and the Java Virtual Machine’s (JVM) data.
The Java pool advisor outputs the projected change in library cache parse statistics (for Java) when the Java pool size is changed. (It is active if statistics_level = typical or a higher level.)
The streams pool is an area within the SGA reserved for replication (in structures such as Data Guard or GoldenGate). To view the latest SGA values, the v$sga view can be queried.
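For example:

```sql
-- Summary SGA figures, matching what is printed at instance startup
SELECT * FROM v$sga;

-- Or, as a SQL*Plus shortcut:
SHOW SGA;
```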
Automatic memory management in 11G
Automatic memory management in 10G
Oracle processes can be divided into 2 main groups:
1. Processes that run when an application is started, or that start as a result of entered SQL statements (Java applications, SQL*Plus or Toad, etc.).
2. Processes that the DBMS itself starts (server and background processes).
When a user uses SQL*Plus, Toad (Tool for Oracle Application Developers) or another application, the DBMS instantiates server processes to carry out the work requested by the user processes. In addition, each instance runs its own background processes.
Asynchronous disk reads/writes and other necessary operations are handled by the background processes.
Which processes run can vary according to how the database, the operating system and Oracle’s settings have been configured.
An example of this is databases that aren’t in archive log mode, meaning that the ARCn process isn’t in use. On servers working in dedicated server mode, a server process is started for every user process.
Servers that are in shared mode don’t start a server process for every user process. The dispatcher holds session requests in a queue, and a pool of shared server processes picks them up and executes them. Occasionally applications run on the same machine as the Oracle database; in cases such as these, to reduce the workload, the DBMS can execute the user and server work in the same process.
The server process corresponding to the user process executes one or more of the operations listed below:
- Parsing and running SQL statements.
-Reading Data Blocks of Datafiles and copying them to the SGA.
Background processes in non-RAC or ASM instances
– Database Writer process (DBWn)
– Log Writer process (LGWR)
– CheckPoint process (CKPT)
– System Monitor (SMON)
– Process Monitor (PMON)
– Recoverer process (RECO)
– Job Queue process
– Archiver Process (ARCn)
– Queue Monitor process (QMNn)
It is possible to view background process information with v$bgprocess.
Some BG processes start automatically when an instance starts, and some only when required.
To view Background processes enter the following:
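A sketch of such a query over v$bgprocess (the paddr filter is the conventional way to show only the processes that are actually running):

```sql
SELECT name, description
FROM   v$bgprocess
WHERE  paddr <> '00'
ORDER  BY name;
```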
This process is responsible for writing dirty data in the Database Buffer Cache to disk, dirty data being data that has been altered in memory but not yet written to the datafiles. For many systems a single DBWn process is enough, but this number can be increased if required. If a database has many write transactions, increasing the number of DBWn processes can improve transaction performance. On single-core systems it isn’t practical to increase the number of DBWn processes.
When a buffer area in the DBC has been changed and committed, it is marked as dirty and added to the LRU write (LRUW) list, which is kept in System Change Number (SCN) order. This queue is kept consistent with the redo queue present in the redo log.
When there is no space left in the DBC, i.e. when the threshold has been reached, the DBWn process writes the dirty buffers to the datafiles according to the LRUW list.
Frequently accessed data blocks are kept in the buffer cache through use of the LRU algorithm; what gets written to disk has a high probability of being the least recently accessed.
To view the System Change Number:
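For example (v$database exposes the current SCN from 10g onwards):

```sql
SELECT current_scn FROM v$database;
```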
The Log Writer Process is responsible for writing the data in the Redo Log Buffer to the Redo Log files. This occurs in the following situations:
- When a transaction is committed (when space is needed in the Redo Log buffer, data can occasionally be written to file before a commit)
- When the Redo Log Buffer is 1/3 full
- Before the Database Write Process writes to disk
- Every 3 seconds
As the DBWn process writes the dirty buffer data from the DBC to disk, the Checkpoint process records the checkpoint information in the control file and updates the datafile headers.
This is known as the checkpoint position. It is used to ensure that the data blocks have been written from memory to disk in an organized way, which prevents loss of data if a server instance or database error occurs.
The time needed for instance recovery operations is thus reduced. It also ensures that all data committed during the shutdown sequence has been written to disk.
In summary, the checkpoint process writes the checkpoint information to the control files, so that when the DBMS starts it can read the control files and obtain the necessary information.
The checkpoint information it records includes the following important contents:
- SCN (System Change Number)
- The starting position of the recovery operations within the redo log files
- Log information
When the instance starts, the System Monitor (SMON) background process performs recovery if necessary: it synchronizes the datafiles and the control file to the same System Change Number (SCN) and opens the database. The process rolls forward committed transactions and rolls back those that weren’t committed.
It also cleans up temporary segments. If necessary, other processes can trigger SMON.
The Process Monitor carries out recovery procedures if user processes stop unexpectedly. It is also responsible for cleaning up the buffer cache entries used by the failed process and making sure other resources are not wasted. Example: if an active transaction gets cancelled, any locks are released and the active process IDs are erased.
Locks held by the failed user are lifted, and rows that were changed but not committed are rolled back and unlocked (similar to what happens when the database is closed).
The PMON also registers instance and dispatcher information with the listener. As with SMON, PMON can trigger other processes if necessary.
The Recoverer process is used in distributed database scenarios. It resolves errors that occur during distributed transactions (by connecting to the other databases involved).
This process runs when the DBMS is in archive log mode: when there’s a log switch, it archives the redo log to the configured paths (directories). Archives are transferred to standby machines if they are present (this will be shown in the Data Guard tutorial).
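The archive mode and destinations can be checked like this:

```sql
-- SQL*Plus command: shows log mode, current sequence and archive destination
ARCHIVE LOG LIST;

-- Or via the data dictionary
SELECT log_mode FROM v$database;
```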
MMON (Manageability Monitor Process): Handles tasks such as gathering statistics and alerts.
MMNL (Lightweight Manageability Monitor Process): Assists the MMON process described above, as well as handling session history and other lightweight manageability tasks.
MMAN (Memory Manager Process): Manages Automatic Memory, is able to automatically allocate memory from where it isn’t needed to where it is.
RBAL (Rebalance Process): Is the process that balances the Automatic Storage Management in disks where the volume manager is in use.
CJQ0 (Job Queue Coordinator Process): Starts the users’ jobs and can be thought of as a scheduling service (for PL/SQL statements and procedures).
QMNC (AQ Coordinator Process): Monitors the Advanced Queuing service and helps Advanced Queuing and Oracle Streams function.
As previously mentioned, when a query has been entered, Oracle checks whether or not the necessary block is in the buffer cache and, if it is, transfers it to the user. If the block isn’t in the cache, it reads the block from the datafile (i.e. the disk) and copies it to the buffer cache as well as returning it to the user. This speeds up processing if the same query is later requested.
Buffer memory areas can have 4 different statuses:
Pinned: The buffer is currently in use, for example when one or more sessions are accessing, or waiting to access, the same block.
Clean: The buffer is unpinned and available for reuse; its contents match what is on disk (it can be aged out if there is no repeated access to it).
Free or Unused: The buffer area is empty, as when the instance has just been started.
Dirty: The area in the buffer has been changed but not yet written to disk.
This illustrates a situation similar to when there is a telephone line and a connection between 2 parties. A session begins whenever one party calls and the other party answers. (Server processes start as a result of the client processes).
1 – An instance is created; memory is allocated from the operating system, background processes are started and user access is made available.
2 – A user starts an application or sends a connection request through Oracle.
3 – The listener manages access between the client and the instance over the network. It takes incoming connection requests and correspondingly starts a dedicated server process.
4 – A user enters DML or other SQL, for example changing a user’s ID information.
5 – The server process takes the statement and checks the shared pool to see whether the same SQL has previously been executed. If it has, it verifies the user’s privileges for the SQL and the requested data, and reuses the cached parse information.
If the server process is unable to find the statement in the shared pool, a new memory area is reserved for the SQL, and the SQL is parsed and then processed.
6 – The requested data blocks are retrieved from the buffer cache if present; if not, they are read from the datafiles.
7 – The Server processes change the data in the SGA.
The changes are recorded in the redo log buffer, which the log writer (LGWR) writes to the redo log files.
At convenient times, the DBMS’s DBW writes the changes to the datafile permanently.
8 – If the change transaction has been executed successfully, the server process sends a message to the application over the network. If it hasn’t, an error message is sent.
9 – After the instance has been created, other background processes monitor all the procedures and take action when necessary. An example of this is that if a redo log switch occurs, the ARCn process archives the redo log.
The database server additionally manages transactions of the other users, such as preventing making changes to the same data at the same time.
Control Files: Store the identification information of the database. Each instance has at least one control file, with two recommended if separate physical disks are available. By default there are 3 control files, and it is advisable to spread them across different physical disks.
The contents of a Control File:
• System Change Number (SCN) [very important]
• Names and locations of data files
• Names and locations of redo log files
• The sequence numbers of the current log
• Backup set information
As the control file is in binary form, a user cannot directly view or change its contents. For detailed information please read the Managing the Control File article.
DataFiles: The files where user objects are kept. They are the physical counterparts of the logical tablespaces (which contain tables & indexes). A datafile can only belong to a single tablespace; however, a tablespace can have more than one datafile (apart from bigfile tablespaces).
Online RedoLog Files: Contain the transaction change information. By default there are 3 online redo logs and a minimum of 2, whose sizes and quantities can be increased depending on requirements. It is highly advisable to keep copies on separate disks.
Online Redo Log files are operated cyclically, i.e. (when using the default of 3 files) Data is written to the 1st file, followed by the 2nd file and the 3rd file before writing to the 1st file again.
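The cycle can be observed in v$log, where the CURRENT group is the one LGWR is writing to:

```sql
SELECT group#, sequence#, status
FROM   v$log
ORDER  BY group#;
```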
Parameter File: The server parameter file, also known as the start-up file, is in binary format. When the database starts, this file is read first in order to decide how much memory to request from the operating system. It also stores the paths of the control files, from which the rest of the information is retrieved.
A copy of the parameter file as a text file can be made if required (using sql>create pfile from spfile).
BackUp Files: Oracle’s backup files are in 3 forms: export, dbf and rman files.
Oracle has 3 ways of creating backups:
1 – Export: which is a logical backup made by copying the database data and definitions.
2 – Cold Backup (a user backup): an offline type of backup that is taken after the database has been shutdown.
3 – RMAN backup: an uncompressed database backup that can be considered a clone.
Archived Redo Log Files: This is the name given to the copies of the online redo logs, also known as archive logs. These log files make it possible to restore a database to an earlier point in time.
Password File: While a database is offline, a password file can be used to authenticate users logging in to a database. Can be re-created if there are any problems with it.
Alert Log: This log file stores database messages such as errors. The alert log should be constantly monitored by database administrators.
Trace files: Contain much more detail about error messages. If the DBMS has been configured to record the running SQL, the trace files will include this information as well.
Schema = User (Objects Catalogue) (Applies to both MS SQL and MySQL databases)
Database = Created from tablespace.
Tablespace = A logical structure containing objects composed of segments.
Segment = Comprised of extents. Data segments store the data of user tables, indexes and similar objects within their tablespaces; undo segments, kept in the undo tablespace, are created in order to maintain read consistency.
Extent = Made from data blocks that are side-by-side or contiguous.
Datablock = This is a unit of the smallest structure in Oracle. Data in Oracle is stored in datablocks.
An Oracle database is logically divided into tablespaces.
• user tbs (big / small)
• undo tbs
system tablespace: is automatically created along with a new database. It’s a necessary part of every database.
It hosts the data dictionary, i.e. the tables necessary for a database to run, and can’t be taken offline.
SysAux tablespace: The SysAux is an auxiliary tablespace, introduced with Oracle 10G, that helps the System Tablespace. This is done by containing some schema objects thereby reducing the amount of clutter on the System Tablespace.