Oracle Data Guard: how does it work?

The following entries were used during this setup. If you are planning to use an active duplicate to create the standby database, then taking a backup is unnecessary. For a backup-based duplicate, or a manual restore, take a backup of the primary database.
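As a rough, hedged illustration of that backup step (these are generic RMAN commands, not the listing from the original setup, and they assume the backups go to the FRA):

    RMAN> CONNECT TARGET /
    RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
    RMAN> BACKUP DATABASE PLUS ARCHIVELOG;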

Create a controlfile for the standby database by issuing the following command on the primary database. I'm making a replica of the original server, so in my case I only had to amend the following parameters.
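A minimal sketch of that step follows; the file paths, database names, and parameter values are placeholders for illustration, not the ones used in this setup:

    SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/tmp/standby.ctl';

    # parameters typically amended in the standby's temporary pfile (illustrative values)
    *.db_unique_name='DB11G_STBY'
    *.fal_server='DB11G'
    *.log_archive_config='DG_CONFIG=(DB11G,DB11G_STBY)'
    *.log_archive_dest_2='SERVICE=DB11G ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=DB11G'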

Note that the backups were copied across to the standby server as part of the FRA copy. If your backups are not held within the FRA, you must make sure you copy them to the standby server and make them available using the same path as on the primary server.

Create online redo logs for the standby. It's a good idea to match the configuration of the primary server. In addition to the online redo logs, you should create standby redo logs on both the standby and the primary database in case of switchovers.
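As a hedged sketch (group numbers, sizes, and file paths are assumptions for illustration), the standby redo logs can be added with statements like these, run on both the primary and the standby:

    SQL> -- one more standby group per thread than the online redo logs, each at least as large
    SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 10 ('/u01/oradata/DB11G/srl10.log') SIZE 1G;
    SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 11 ('/u01/oradata/DB11G/srl11.log') SIZE 1G;
    SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 12 ('/u01/oradata/DB11G/srl12.log') SIZE 1G;
    SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 13 ('/u01/oradata/DB11G/srl13.log') SIZE 1G;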

The standby redo logs should be at least as big as the largest online redo log, and there should be one extra group per thread compared to the online redo logs. In my case, the same standby redo logs had to be created on both servers. When using an active duplicate, the standby server requires a static listener configuration in its "listener.ora" file.
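For reference, a minimal static registration entry of that kind might look like the sketch below; the global database name, Oracle home, and SID are placeholders, not the values from this setup:

    SID_LIST_LISTENER =
      (SID_LIST =
        (SID_DESC =
          (GLOBAL_DBNAME = DB11G_STBY)
          (ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_1)
          (SID_NAME = DB11G)
        )
      )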

In this case I used a static entry along those lines. To make sure the primary database is configured for switchover, we must also create the standby redo logs on the primary server. Start the auxiliary instance on the standby server using the temporary "init.ora" file.

One more thing that happened to me is that sometimes MRP0 just got stuck in the middle of a log. In that case it waited for the entire redo file to be copied and then continued.

I also have an SR open on this; I will update this post when I have more information. If you read carefully and followed the logic, you probably realized that we had a constant lag issue. Copying an entire redo log takes over an hour, and during that time we create more archives, which led to a really huge lag (a few hours during peaks), and we wanted to reduce it. The problem is that at peak times we created more redo than we could copy.
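This is not the monitoring we actually used, but a generic, hedged way to watch that lag from the standby is to compare the last received and last applied sequences, or to query the lag Data Guard itself reports:

    SQL> SELECT thread#, MAX(sequence#) AS last_received FROM v$archived_log GROUP BY thread#;
    SQL> SELECT thread#, MAX(sequence#) AS last_applied FROM v$archived_log WHERE applied = 'YES' GROUP BY thread#;
    SQL> SELECT name, value FROM v$dataguard_stats WHERE name IN ('transport lag', 'apply lag');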

Our solution was to reduce the size of the redo logs to 2GB. That way, even if we create a few GB in a short while, we can copy them in parallel, allowing a shorter delay until MRP0 can start applying.
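Online redo logs cannot be resized in place, so shrinking them means adding new, smaller groups and dropping the old ones once they are inactive. A rough sketch (group numbers and paths are assumptions):

    SQL> ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 21 ('/u01/oradata/DB11G/redo21.log') SIZE 2G;
    SQL> ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 22 ('/u01/oradata/DB11G/redo22.log') SIZE 2G;
    SQL> ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 23 ('/u01/oradata/DB11G/redo23.log') SIZE 2G;
    SQL> -- switch logs until the old groups show INACTIVE in v$log, then drop them
    SQL> ALTER SYSTEM SWITCH LOGFILE;
    SQL> SELECT group#, status FROM v$log;
    SQL> ALTER DATABASE DROP LOGFILE GROUP 1;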

We still have lag during this time, of course, but we minimize it because we allow MRP0 to start applying after 2GB instead of 10GB, while we keep copying more archives in parallel. It is not a perfect script, but I use it to measure the transfer rate.

You invoke the failover operation on the standby database that you want to fail over to the primary role. You can also enable fast-start failover, which allows Data Guard to automatically and quickly fail over to a previously chosen, synchronized standby database.
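With the broker in place, both a manual failover and fast-start failover are driven from DGMGRL. A minimal sketch, assuming a standby with the placeholder name db11g_stby:

    DGMGRL> CONNECT sys@db11g_stby
    DGMGRL> FAILOVER TO 'db11g_stby';
    DGMGRL> ENABLE FAST_START FAILOVER;
    DGMGRL> START OBSERVER;

Fast-start failover also needs an observer running and a FastStartFailoverTarget configured, so treat the last two commands as a pointer rather than a complete recipe.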

Databases that are disabled after a role transition are not removed from the broker configuration, but they are disabled in the sense that the databases are no longer managed by the broker. To reenable broker management of these databases, you must reinstate or re-create the databases.
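For example, assuming Flashback Database was enabled on the old primary and it has been restarted in mount mode (the names are placeholders), it can usually be reinstated from DGMGRL instead of being rebuilt:

    DGMGRL> CONNECT sys@db11g_stby
    DGMGRL> REINSTATE DATABASE 'db11g';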

The Oracle Data Guard broker is a distributed management framework that automates and centralizes the creation, maintenance, and monitoring of Data Guard configurations. After creating the Data Guard configuration, the broker monitors the activity, health, and availability of all systems in the configuration. You can perform most of the activities that are required to manage and monitor the databases in the configuration from the DGMGRL prompt or in scripts. If you do not create a Data Guard broker configuration, you can manage your standby databases by using SQL commands.
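A minimal sketch of creating such a configuration, assuming DG_BROKER_START is already TRUE on both databases and that db11g and db11g_stby are placeholder db_unique_name values and connect identifiers:

    DGMGRL> CONNECT sys@db11g
    DGMGRL> CREATE CONFIGURATION 'dg_config' AS PRIMARY DATABASE IS 'db11g' CONNECT IDENTIFIER IS db11g;
    DGMGRL> ADD DATABASE 'db11g_stby' AS CONNECT IDENTIFIER IS db11g_stby MAINTAINED AS PHYSICAL;
    DGMGRL> ENABLE CONFIGURATION;
    DGMGRL> SHOW CONFIGURATION;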

Oracle Data Guard leverages the existing database redo-generation architecture to keep the standby databases in the configuration synchronized with the primary database. By using the existing architecture, Oracle Data Guard minimizes its impact on the primary database. Oracle Data Guard uses several processes to achieve the automation that is necessary for disaster recovery and high availability. Some of these processes support Oracle Database in general, and other processes are specific to a Data Guard environment.
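Before looking at the individual processes, one hedged way to see several of them at work is to query the standby while redo is being shipped and applied (a generic query, not specific to any particular setup):

    SQL> SELECT process, status, thread#, sequence#, block# FROM v$managed_standby ORDER BY process;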

LGWR collects transaction redo information and updates the online redo logs. For asynchronous (ASYNC) standby destinations, independent redo transport slave processes (TTnn) read the redo from either the redo log buffer in memory or the online redo log file, and then ship it to the standby database. Other than starting the asynchronous TTnn processes, LGWR has no interaction with any asynchronous standby destination.

The ARCn process creates a copy of the online redo log files locally for use in a primary database recovery operation. ARCn is also responsible for shipping redo data to an RFS process at a standby database and for proactively detecting and resolving gaps on all standby databases.

There can be up to 30 archiver processes; the default is four. RFS receives redo information from the primary database and can write the redo into standby redo logs or directly to archived redo logs.

You can perform most of the activities required to manage and monitor the databases in the configuration using DGMGRL. In some situations, a business cannot afford to lose data. In other situations, the availability of the database may be more important than the loss of data.

Some applications require maximum database performance and can tolerate some small amount of data loss. The following descriptions summarize the three distinct modes of data protection.

Maximum protection: This protection mode ensures that no data loss will occur if the primary database fails. To provide this level of protection, the redo data needed to recover each transaction must be written to both the local online redo log and to the standby redo log on at least one standby database before the transaction commits.

To ensure data loss cannot occur, the primary database shuts down if a fault prevents it from writing its redo stream to the standby redo log of at least one transactionally consistent standby database.

Maximum availability: This protection mode provides the highest level of data protection that is possible without compromising the availability of the primary database. Like maximum protection mode, a transaction will not commit until the redo needed to recover that transaction is written to the local online redo log and to the standby redo log of at least one transactionally consistent standby database.

Unlike maximum protection mode, the primary database does not shut down if a fault prevents it from writing its redo stream to a remote standby redo log. Instead, the primary database operates in maximum performance mode until the fault is corrected, and all gaps in redo log files are resolved.

When all gaps are resolved, the primary database automatically resumes operating in maximum availability mode. This mode ensures that no data loss will occur if the primary database fails, but only if a second fault does not prevent a complete set of redo data from being sent from the primary database to at least one standby database.

Maximum performance: This protection mode (the default) provides the highest level of data protection that is possible without affecting the performance of the primary database.

This is accomplished by allowing a transaction to commit as soon as the redo data needed to recover that transaction is written to the local online redo log. The primary database's redo data stream is also written to at least one standby database, but that redo stream is written asynchronously with respect to the transactions that create the redo data.

When network links with sufficient bandwidth are used, this mode provides a level of data protection that approaches that of maximum availability mode with minimal impact on primary database performance. The maximum protection and maximum availability modes require that standby redo log files are configured on at least one standby database in the configuration.
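As a hedged example (the service and database names are placeholders), once standby redo logs exist and redo transport to the standby is synchronous, the protection mode can be raised from SQL*Plus on the primary:

    SQL> ALTER SYSTEM SET log_archive_dest_2='SERVICE=db11g_stby SYNC AFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=db11g_stby';
    SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;
    SQL> SELECT protection_mode, protection_level FROM v$database;

With the broker, the equivalent is EDIT CONFIGURATION SET PROTECTION MODE AS MaxAvailability; issued from DGMGRL.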

Oracle Database provides several unique technologies that complement Data Guard to help keep business-critical systems running with greater levels of availability and data protection than when using any one solution by itself. The following list summarizes some of these Oracle high-availability technologies:

RAC enables multiple independent servers that are linked by an interconnect to share access to an Oracle database, providing high availability, scalability, and redundancy during failures.

RAC and Data Guard together provide the benefits of system-level, site-level, and data-level protection, resulting in high levels of availability and disaster recovery without loss of data:

RAC addresses system failures by providing rapid and automatic recovery from failures, such as node failures and instance crashes. It also provides increased scalability for applications. Data Guard addresses site failures and data protection through transactionally consistent primary and standby databases that do not share disks, enabling recovery from site disasters and data corruption.

Many different architectures using RAC and Data Guard are possible, depending on the use of local and remote sites, the number of nodes, and the combination of logical and physical standby databases.

The Flashback Database feature provides fast recovery from logical data corruption and user errors. By allowing you to flash back in time, previous versions of business information that might have been erroneously changed or deleted can be accessed once again. This feature eliminates the need to restore a backup and roll forward changes up to the time of the error or corruption.

Instead, Flashback Database can roll back an Oracle database to a previous point in time without restoring datafiles. It also provides an alternative to delaying the application of redo in order to protect against user errors or logical corruptions.

Therefore, standby databases can be more closely synchronized with the primary database, reducing failover and switchover times. Flashback Database also avoids the need to completely re-create the original primary database after a failover: the failed primary database can be flashed back to a point in time before the failover and converted into a standby database for the new primary database.
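A hedged sketch of that path, run after a failover (the SCN value and names are placeholders; the actual SCN comes from the new primary):

    SQL> -- on the new primary: find the SCN at which it became primary
    SQL> SELECT standby_became_primary_scn FROM v$database;
    SQL> -- on the mounted old primary: rewind, convert, and resume redo apply
    SQL> FLASHBACK DATABASE TO SCN 1234567;
    SQL> ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
    SQL> SHUTDOWN IMMEDIATE
    SQL> STARTUP MOUNT
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;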


