Oracle Real Application Clusters (Internals)
Anil Nair
Sr. Principal Product Manager,
Oracle Real Application Clusters (RAC)
Nov 12th, 2016
@Amnair, @OracleRACpm
http://www.linkedin.com/in/anil-nair-01960b6
http://www.slideshare.net/AnilNair27/
15 Years of Performance Innovations
Safe Harbor Statement
The following is intended to outline our general product direction. It is intended for
information purposes only, and may not be incorporated into any contract. It is not a
commitment to deliver any material, code, or functionality, and should not be relied upon
in making purchasing decisions. The development, release, and timing of any features or
functionality described for Oracle’s products remains at the sole discretion of Oracle.
#1 Choice for Scalability & Availability
Without Application Code Changes!
• Out-of-the-box support for major applications, e.g. Oracle Apps, Siebel, SAP, PeopleSoft, etc.
• Unprecedented scalability: add nodes as needed for linear scalability
• Seamless integration with other options:
  • RAC + Data Guard provides site availability
  • RAC + Multitenant provides availability and scalability for consolidated environments
  • RAC + IMDB provides availability and scalability for DSS environments
  • RAC + IMDB + Reader Nodes provides even more flexibility when it comes to scaling your workload
Oracle Real Application Cluster Family of Solutions
• Clusterware
  – Cluster Domain Architecture
• Automatic Storage Management
  – Flex Disk Group
  – Create database clones
• Autonomous Health Framework

Announcing Oracle Database 12c Release 2 on Oracle Cloud: Oracle is presenting features for Oracle Database 12c Release 2 on Oracle Cloud. Availability of the on-prem release will be announced soon.
Program Agenda
1. 15 Years of Innovations
2. RAC Scalability & Availability Optimizations
3. Automated for You
Program Agenda
1. 15 Years of Innovations
Oracle RAC Evolution
• Oracle 6 to Oracle8i – OPS (199x): Oracle Parallel Server
• Oracle9i RAC (2001): RAC debuts
• Oracle Database 10g RAC (2004): Grid Computing, Oracle Clusterware
• Oracle Database 11g Rel. 1 with RAC (2007): Engineered Systems
• Oracle Database 11g Rel. 2 with RAC (2009): RAC One Node
• Oracle Database 12c Rel. 1 with RAC (2013): Multitenant
15 Years of Innovations – Leading to Oracle 12c Release 2
• Oracle Clusterware – Cluster Domain Architecture
Domain Services Cluster
[Figure: a Cluster Domain. A central Domain Services Cluster (DSC) hosts the shared services: ASM Service, ASM IO Service, ACFS Services, Trace File Analyzer (TFA) Service, Management Repository (GIMR) Service, and additional optional services such as Rapid Home Provisioning (RHP). Member Clusters attach to the DSC over the private network and the ASM network: (1) a Database Member Cluster using the DSC's ASM Service (shared ASM), (2) a Database Member Cluster using the ASM IO Service of the DSC, (3) a Database Member Cluster using local ASM against SAN storage, and (4) an Application Member Cluster running GI only.]
15 Years of Innovations – Leading to Oracle 12c Release 2
• Automatic Storage Management
Database Oriented Storage Management
• Flex Diskgroups enable File Groups
• A File Group is the set of files belonging to a database or PDB
• A File Group's name defaults to the database or PDB name
[Figure: current organization vs. Oracle 12c Release 2. Today, a Diskgroup holds an undifferentiated mix of files across the storage devices (DB-1:File-1, DB-2:File-4, DB-3:File-9, and so on). In 12c Release 2, the same Diskgroup organizes the files into per-database File Groups (FileGroup => Database): File-1..3 under DB-1, File-4..6 under DB-2, File-7..9 under DB-3.]
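For illustration, a minimal sketch of creating a Flex Diskgroup with 12.2 ASM SQL (the disk paths and the database name here are hypothetical, not from the slides):

CREATE DISKGROUP data FLEX REDUNDANCY
  DISK '/dev/mapper/disk1', '/dev/mapper/disk2', '/dev/mapper/disk3'
  ATTRIBUTE 'compatible.asm' = '12.2.0.0.0', 'compatible.rdbms' = '12.2.0.0.0';

-- A File Group can also be added explicitly for an existing database:
ALTER DISKGROUP data ADD FILEGROUP fg_db1 DATABASE db1;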
Flex Diskgroup
1. Quota Groups provide a means of enforcing quota management
2. Redundancy is modifiable at the File Group level
3. Shadow copies of File Groups can be created and split off
[Figure: a Flex Diskgroup containing Quota Group-A and Quota Group-B; the File Groups DB-1, DB-2, and DB-3 (three files each) are assigned to the Quota Groups.]
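A hedged sketch of the quota and redundancy controls described above (the quota group, file group, and size values are illustrative and continue the hypothetical names from the previous example):

-- 1. Quota enforcement: create a quota group and bind a file group to it
ALTER DISKGROUP data ADD QUOTAGROUP quota_a SET 'quota' = '10T';
ALTER DISKGROUP data MODIFY FILEGROUP fg_db1 SET 'quota_group' = 'quota_a';

-- 2. Redundancy is a modifiable, per-file-group property
ALTER DISKGROUP data MODIFY FILEGROUP fg_db1 SET 'redundancy' = 'high';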
15 Years of Innovations – Leading to Oracle 12c Release 2
• Autonomous Health Framework
Oracle 12c Release 2 – Autonomous Health Framework
Program Agenda
2. RAC Scalability & Availability Optimizations – Automated for You
Cache Fusion – A Brief Refresher
• Maximum 3-way communication
• Dynamic Resource Management (DRM) attempts to optimize down to 2-way communication by moving the master to the instance where the resource is frequently accessed
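As a hedged illustration, cumulative DRM activity can be inspected from SQL; V$DYNAMIC_REMASTER_STATS is a long-standing RAC view, though its exact columns vary by version:

-- Remastering operations and timings since instance startup
SELECT * FROM v$dynamic_remaster_stats;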
Cache Fusion – Automatically Chooses the Optimal Path
• Cache Fusion collects and maintains statistics on private network access time and disk access time
• Cache Fusion uses this information to find the optimal path (network or disk) to serve blocks
• E.g., flash storage may provide better access times to data than the private network under high load
Performance Outliers – Hard to Find the Cause
Introducing LMS CR Slaves – Helps Mitigate Performance Outliers
• In previous releases, LMS works on incoming consistent-read requests in a sequential fashion
• Sessions requesting consistent blocks that require applying a lot of undo may keep LMS busy
• Starting with Oracle RAC 12c Release 2, LMS offloads work to 'CR slaves' if the amount of undo to be applied exceeds a certain, dynamic threshold
• The default is 1 slave, and additional slaves are spawned as needed
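A hedged way to gauge the consistent-read load on LMS is the existing V$CR_BLOCK_SERVER view (interpreting the counters is version-specific):

-- CR requests served, and requests satisfied by the light-work rule
SELECT cr_requests, current_requests, light_works
FROM v$cr_block_server;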
Introducing the Undo Header Hash Table
• OLTP sessions require remote undo header lookups to find
  – whether a transaction has committed
  – block cleanouts
• To reduce remote lookups, each instance maintains a hash table of recent transactions (active & committed)
• The Undo Header Hash Table improves scalability by eliminating remote lookups
Optimized Singleton Workload Scaling
Provides consistent performance after planned service failover
Service-oriented Buffer Cache Access determines the data (at the database object level) accessed by a service and masters this data on the node on which the (singleton) service is offered, which improves data access performance.
Service-Oriented Buffer Cache Access
• Cache Fusion maintains a service-to-buffer-cache relationship
  – Tracks which service causes row(s) to be read into the buffer cache
• This statistic is used to
  – master the resource only on those nodes where the service is active
• Optimized "Resource Master" dispersion
  – Pre-warms the cache during service failover amid planned downtime
[Figure: NodeA and NodeB, each running Oracle GI and Oracle RAC, hosting the singleton services cons_1 and cons_2.]
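For illustration, per-object mastering can be observed through the existing V$GCSPFMASTER_INFO view (a hedged example; rows with the 'Affinity' policy show where resources are currently mastered):

-- Which instance currently masters each affinity-mastered object
SELECT data_object_id, current_master, previous_master, remaster_cnt
FROM v$gcspfmaster_info
WHERE gc_mastering_policy = 'Affinity';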
Pluggable Database and Service Isolation
A chatty PDB does not affect other PDBs' performance
Pluggable Database and Service Isolation improves performance by reducing DLM operations for PDBs / services not offered in all instances and by optimizing block operations based on in-memory block separation.
Oracle Multitenant 12c Rel. 1 Scalability Implementation
• A single domain (Domain 0) at the CDB level for all PDBs
• Operations like PDB start, stop, reconfiguration, etc. rely on the single domain
• Accessing a resource from the hash table using the single domain degrades as the number of PDBs increases (> 250)
[Figure: Pdb-1 through Pdb-4 all map to Domain 0, which owns a single Resource Hash Table in the instance SGA.]
Oracle Multitenant 12c Rel. 2 Scalability Optimization
• With Oracle Database 12c Rel. 2, each PDB gets its own PDB-specific domain ID
• GES & GCS resources are balanced across the PDB-specific domain IDs
• Resulting in improved, consistent performance
  – One chatty PDB will not affect the performance of any other PDB
  – Reduced hash table size, as a hash table is only created on the instances where the PDB is running
[Figure: Domains 1 through 4, each with its own Resource Hash Table.]
Program Agenda
2. Scalability & Availability Optimizations
   – Automated for You
   – Optionally Tunable for Your Environment
3. New Features Not to Miss
Appendixes:
A. Step by Step Upgrade GI from 12.1 to 12.2
B. 12.2 Installer New Features in Action
Configure Swap on SSD Storage
Statistics show that shortage of memory and subsequent swapping are a major cause of downtime (node evictions or instance evictions).
• Configure swap on an SSD device
• Ensure the device is formatted as ext4
• Add "discard,noatime" to /etc/fstab
• Configure OS parameters:
  vm.swappiness=100
  vm.panic_on_oom=0
  vm.oom_kill_allocating_task=0
  vm.overcommit_memory=0

Example commands (a rotational value of 0 confirms a non-rotational, i.e. SSD, device):
# cat /sys/block/sdm/queue/rotational
0
# blkid /dev/sdm
/dev/sdm: UUID="xxxx" TYPE="ext4"
# cat /etc/fstab | grep sdm
UUID="xxx" swap ext4 discard,noatime 1 2
# sysctl -w vm.swappiness=100
# sysctl -w vm.panic_on_oom=0
# sysctl -w vm.oom_kill_allocating_task=0
# sysctl -w vm.overcommit_memory=0
# sysctl -p /etc/sysctl.conf
High-Level Reconfiguration Stages
• Detect node/instance hang/death
• Evict the dead/hung instance/node
• Elect a Recovery Master (RM)
  – The SMON process of one of the surviving instances will get the lock and be elected RM
• The RM will then
  – read the redo of the evicted instance
  – apply recovery
  – signal completion
[Flow: Detect → Evict → Elect Recovery → Read Redo → Apply Recovery]
Reduced Reconfiguration Time with "Recovery Buddy"
• The Recovery Buddy feature optimizes reconfiguration
  – Buddy instances eliminate the "Elect Recovery Master" phase
  – Redo read is optimized via memory reads
  – Apply recovery is optimized, as switching between reads and writes is no longer required
[Flow: Detect → Evict → Read Redo → Apply Recovery; the "Elect Recovery" phase is eliminated]
Buddy Instances – Under the Hood
1. Buddy instance mapping is simple (1 → 2, 2 → 3, 3 → 4, 4 → 1)
2. Recovery buddies are set during instance startup
3. RMS0 on each recovery-buddy instance maintains an in-memory area of redo log changes
4. The in-memory area is used during recovery, eliminating the need to physically read the redo

Example (cluster "MyCluster" with Inst 1–4):
1. Inst1 is the recovery buddy for Inst2
2. Inst2 is the recovery buddy for Inst3, and so on
3. The recovery buddy mapping changes as instances join or leave; e.g., if Inst3 crashes, a new recovery buddy will be assigned for Inst4
Better Availability Means Reducing Reconfiguration Times
• Overall recovery time depends on reconfiguration time, which in turn depends on the number of dirty blocks in the instance that needs recovery
• Reduce recovery time by calibrating and setting FAST_START_MTTR_TARGET:
  1. Measure: select target_mttr, estimated_mttr from gv$instance_recovery
  2. Set FAST_START_MTTR_TARGET=<value> accordingly
  – The estimated_mttr value is the mean time to recover should a crash occur
• Vendor clusterware / NFS
  – Instance reconfiguration depends on Oracle Clusterware; vendor clusterware only adds an additional layer
  – Oracle Homes on NFS will be affected by NFS hangs, which can cause increased brownouts
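A minimal sketch of the measure-then-set flow above (the 30-second target is illustrative, not a recommendation):

-- 1. Measure current vs. estimated recovery time (in seconds) on all instances
SELECT inst_id, target_mttr, estimated_mttr FROM gv$instance_recovery;

-- 2. Set the target based on the measurement
ALTER SYSTEM SET fast_start_mttr_target = 30 SCOPE = BOTH SID = '*';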
Program Agenda
3. New Features Not to Miss
New Features Not to Miss
1. Leaf Nodes
2. Hang Manager
3. Data Guard
Flex Cluster – A Brief Review
• Flex Cluster and Leaf Nodes were introduced with Oracle RAC 12c Rel. 1
• Leaf Nodes are loosely coupled nodes that are attached to Hub Nodes
• Starting with Oracle RAC 12c Rel. 2, it is now possible to run read-only workloads on instances running on Leaf Nodes → Reader Nodes
• A Reader Node failure does not impact the overall database activity, making it easy to scale to hundreds of nodes
[Figure: Hub Nodes 1–3 connect to shared storage and the network; Leaf Nodes 1–3 attach to the Hub Nodes.]
How to Run an Instance on a Flex Cluster
[Screenshot]
Reader Node Instances – Configuration
• Configure Reader Nodes with additional memory for queries
  – Goal: prevent spilling to the temp tablespace during sorts
• Create a local temp tablespace to improve performance for spills
  – Avoids cross-instance space management
  – Avoids CF (control file) enqueue overhead
[Figure: R/W instances 1–3 on the Hub Nodes, Reader instances 1–3 on the Leaf Nodes.]
Configure a Temporary Tablespace
• Create and configure:
  CREATE LOCAL TEMPORARY TABLESPACE FOR RIM temp_rim
    TEMPFILE '/loc/temp_file_rim'
    EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M AUTOEXTEND ON;
  – One bigfile per tablespace
  ALTER USER scott LOCAL TEMPORARY TABLESPACE blah;
• Users can be configured with
  – a local temporary tablespace, used when the user is connected to a Reader Node instance
  – a shared temporary tablespace, used when the user is connected to a read-write instance
[Figure: temp tablespace selection flow. A session on a read-write instance tries, in order: user shared temp → user local temp → DB shared temp → DB local temp. A session on a read-only instance tries: user local temp → DB local temp → user shared temp → DB shared temp. The first one found continues SQL processing.]
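A hedged way to verify the per-user assignment from the example above (in 12.2, DBA_USERS gains a LOCAL_TEMP_TABLESPACE column; the user name is from the slide's example):

-- Shared vs. local temp assignment for a user
SELECT username, temporary_tablespace, local_temp_tablespace
FROM dba_users
WHERE username = 'SCOTT';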
New Features Not to Miss
2. Hang Manager
Overlooked and Underestimated – Hang Manager
Why is a Hang Manager required?
• Customers experience database hangs for a variety of reasons
  – High system load, workload contention, network congestion, or errors
• Before Hang Manager was introduced with Oracle RAC 11.2.0.2
  – Oracle required information to troubleshoot a hang, e.g.:
    • System state dumps
    • For RAC: global system state dumps
  – Customers usually had to reproduce the hang with additional events set
How Does It Work?
• Hang Manager only considers DB sessions holding resources on which other sessions are waiting
• Hang Manager detects hangs across layers
• Deadlocks and user locks are not managed by Hang Manager
[Flow: Detect Session → Analyze → Verify → Evaluate → Holder Identified. The analysis considers cross-layer holders such as the ASM instance or Leaf Nodes; the evaluation considers QoS policies and user-defined settings.]
Hang Manager Optimizations
• Hang Manager auto-tunes itself by periodically collecting instance- and cluster-wide hang statistics
• Metrics like cluster health and instance health are tracked over a moving average
• This moving average is considered during resolution
• Holders waiting on SQL*Net break/reset are fast-tracked
DBMS_HANG_MANAGER.Sensitivity
• Early warning is exposed via a V$ view
• Sensitivity can be set higher if the user feels the default level is too conservative
• Hang Manager behavior can be further fine-tuned by setting appropriate QoS policies

Hang Sensitivity Level | Description                                                                                    | Note
NORMAL                 | Hang Manager uses its default internal operating parameters to try to meet typical            | Default
                       | requirements for any environment.                                                              |
HIGH                   | Hang Manager is more alert to sessions waiting in a chain than at the NORMAL level.            |
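Raising the sensitivity uses the DBMS_HANG_MANAGER package; a minimal sketch using the documented 12.2 constants:

BEGIN
  DBMS_HANG_MANAGER.SET(DBMS_HANG_MANAGER.SENSITIVITY,
                        DBMS_HANG_MANAGER.SENSITIVITY_HIGH);
END;
/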
New Features Not to Miss
3. Data Guard
Data Guard Standby Redo Apply
• With a typical RAC primary and RAC standby, only one node of the standby can apply redo
• The other RAC instances of the standby are typically in waiting mode, even if apply is CPU bound
• Another instance takes over redo apply only if the instance applying redo crashes
Multi-Instance Redo Apply – Utilize All RAC Nodes on the Standby to Apply Redo
• Parallel, multi-instance recovery means "the standby DB will keep up"
  – Standby recovery utilizes CPU and I/O across all nodes of the RAC standby
  – Up to 3,500+ MB/sec apply rate on an 8-node RAC
• Multi-instance apply runs on all MOUNTED instances or all OPEN instances
• Exposed in the Broker with the 'ApplyInstances' property on the standby
  recover managed standby database disconnect using instances 4;
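For illustration, the same command generalized (standard managed-recovery SQL; 'ALL' spreads apply across every standby instance, and CANCEL stops it):

-- Start multi-instance redo apply on all standby instances
RECOVER MANAGED STANDBY DATABASE DISCONNECT USING INSTANCES ALL;

-- Stop managed recovery
RECOVER MANAGED STANDBY DATABASE CANCEL;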
Single-Instance Redo Apply
[Figure: Primary Instances 1–3 ship Thread 1–3 redo via ASYNC/SYNC processes to RFS processes on the standby, which write to standby redo logs (SRLs); a single coordinator process and its MRP processes on Standby Instance 1 apply all redo.]
Multi-Instance Redo Apply
[Figure: the same three primary instances ship redo, but Standby Instances 1–3 each run an RFS process, a coordinator process, and MRP processes, applying redo in parallel across all standby instances.]
Multi-Instance Redo Apply – 2-Node Standby
[Figure: with a two-node standby, Standby Instances 1 and 2 each run an RFS process, a coordinator process, and MRP processes; redo from all three primary threads is received into SRLs and applied across the two standby instances.]
Program Agenda
A. Step by Step Upgrade GI from 12.1 to 12.2
All of the following steps (Step 1, Step 2, Step 3) are to be executed on the first node only:
1. Create a new directory for the grid home and copy grid_home.zip to that directory:
   $ mkdir /u01/app/12.2.0/crs
   $ scp grid_home.zip /u01/app/12.2.0/crs
2. Unzip grid_home.zip in the newly created directory (/u01/app/12.2.0/crs in the example)
3. Execute ./gridSetup.sh (hint: not runInstaller)
• Optionally enable "Automatically run configuration scripts" and provide root credentials on the next screen. In this example, we will not enable this option.
• Ensure the cluvfy checks are taken care of. In the example, CVU is complaining about missing mandatory patch 21255373.
• Click the "more details" hyperlink to get additional details about mandatory patch 21255373.
• Log in to MOS and download mandatory patch 21255373.
Ensure You Are Using the Latest Version of OPatch
• Download the latest version of OPatch from MOS using patch 6880880. (The version shown was the latest at the time these slides were created.)
• The latest OPatch is installed by unzipping the p6880880* file into the Grid Home on ALL the nodes.
• Now that the patch is installed, continue the upgrade.
• Execute the root scripts on the first node. Ensure you note: 1) the log file location; 2) the last node to be downgraded cannot be a Leaf Node.
• Ensure root.sh completes successfully on ALL the nodes.
• Go back to the window where the installer is running to continue with the upgrade.
Program Agenda
B. 12.2 Installer New Features in Action (Grid Infrastructure Only)
New Image-Based Installation

Image-based (12.1):
1. Download shiphome zip files
2. Unzip grid1/2.zip to a stage location (stage_loc)
3. Execute <stage_loc>/runInstaller.sh
4. Installer bootstraps files to a temp area (1 GB)
5. Installer copies files from <stage_loc> to the Oracle Home (OH)
6. Zip up the OH and store it as image files
7. Unzip the image files to the OH on all nodes of the cluster
8. Execute clone.pl on all nodes of the cluster
9. Run config.sh from one of the nodes

Image-based (12.2):
1. Download image zip files
2. Unzip the image files to the OH on one node of the cluster
3. Run gridSetup.sh from one of the nodes
New 'Lenient' Mode
• The Oracle 12c Rel. 2 Grid Infrastructure installer supports a 'Lenient' mode of installation
• The installer allows the user to bypass nodes that are possibly misconfigured and proceed with configuration on the remaining nodes
• It is the default behavior for all interactive installations
• Supported for silent (non-interactive) installations when "-lenientInstallMode" is specified on the command line
Specify Nodes Using Patterns/Expressions
[Screenshot]
New -executeConfigTools Option
• -executeConfigTools is the new option to (re-)execute post-install configuration tools
• Works for gridSetup (GI) and runInstaller (DB)
• Works with the installer's response file
• Has an interactive UI
• Better logging of configuration tool output for easier diagnostics
• configToolAllCommands is deprecated
Editor's Notes
  1. Reminder for
  2. [title, bullets under coming soon]
3. Let’s review the history and evolution of RAC. When dealing with customers, it’s important to stress that RAC is a tested, proven and mature technology. Oracle announced Oracle RAC in 2001, so the product is now 15 years old, and quite mature. But the evolution of RAC started long before 2001. It started as far back as Oracle 6, with a product called Oracle Parallel Server, or OPS. OPS was Oracle’s first parallel database. But, with OPS, only certain types of workload would scale across multiple servers.

     In June 2001, Oracle introduced Oracle RAC as a priced database option to Oracle9i. Oracle RAC introduced an improved method of sharing data across the database instances in the cluster: Oracle RAC is able to pass the block across the network to the requesting instance. Since the network is orders of magnitude faster than flushing the block to disk and reading it into memory the way OPS handled it, Oracle RAC is able to scale without rewriting the applications. This was a breakthrough. However, Oracle RAC was a challenge to install and support. It relied on various 3rd-party components such as a clusterware component and a shared storage manager. Integrating Oracle RAC with these 3rd-party components was a pain point for customers in the early years of RAC adoption.

     Oracle 10g was introduced in January 2004, and Oracle made some great enhancements to Oracle RAC. To address the complexity, Oracle began providing its own clusterware, Oracle Clusterware, and storage manager, Oracle Automatic Storage Management or ASM. Now, Oracle provided the entire stack for the RAC deployment, putting an end to the complexity of integrating and supporting 3rd-party components in the cluster. With ASM, storage became an elastic pool, making it simple to share disk IO and storage capacity across databases, and to add or remove storage while the database was online. Oracle 10g also improved upon the flexibility of the database, introducing database services as a way to manage multiple workloads running in a shared cluster. Now, resources could be aligned with workload requirements, giving customers greater control over resources to simplify meeting their quality-of-service objectives, even as demand changed over time.

     With Oracle Database 11g, Oracle made RAC even easier to deploy and manage. This was done in two ways. Perhaps most importantly, Oracle introduced Oracle Engineered Systems as a deployment platform for Oracle RAC. Oracle Exadata, first introduced on Oracle Database 11g, is fundamentally a pre-packaged Oracle RAC cluster. Now, customers could order a single product and get a fully functional, highly available and scalable RAC cluster deployed within days of receiving their hardware. Later, we introduced the Oracle Database Appliance, which introduced Engineered Systems to a new class of smaller customers, letting them too deploy highly available RAC databases without the deep knowledge previously required. In Oracle Database 11g Release 2, we introduced many manageability features, including a streamlined deployment using the Oracle Universal Installer, and improved patching.

     All this flexibility made RAC an ideal platform for database consolidation. Customers began consolidating multiple databases within a RAC cluster, to enable pooling and sharing of resources across all their databases. Also, in contrast to VM-style consolidation, cluster consolidation reduced the number of operating system instances requiring management. This reduced the workload of systems, network, and storage administrators.

     Our newest release, Oracle Database 12c, establishes Oracle RAC as the premier infrastructure for building a private database cloud. We’ll be covering more of the details of what’s in the 12c release for RAC in another Guided Learning Path module. But, as you’ve already heard, headlining the 12c release is the new Oracle Multitenant architecture, an architecture that allows customers to consolidate databases without changes or namespace collisions. Customers can now benefit from managing many databases as one, yet still get the isolation they require between databases. And the release provides breakthroughs in high availability with Application Continuity. Application Continuity masks database failures from the applications themselves, eliminating the disruption caused by failures at the database tier and ensuring end-users can continue to work uninterrupted.

     So over its 15-year history, RAC has shown considerable evolution and maturation, and the development teams are not sitting still: a lot of great enhancements and improvements are already planned and being incorporated into upcoming Oracle Database releases.
4. With respect to storage management, ASM has until now been Diskgroup-centric: you create a Diskgroup and store all your database files there. In Oracle 12.2, ASM introduces a new style of management called Database Oriented Storage Management, made available with a new type of Diskgroup called the Flex Diskgroup. As before, we have External, Normal, and High Redundancy Diskgroups, and now we have the new Flex Diskgroup. Flex Diskgroups enable a new concept of a File Group, which is a logical container of files in a Diskgroup that belong to an individual database or PDB. The File Group’s name is usually the database or PDB name. File Groups allow operations to be targeted against all the files belonging to a database collectively. File Groups do not change the way ASM stripes files across the ASM disks; they provide new management efficiency and a set of new capabilities.
5. A key feature of Flex Diskgroups and File Groups is quota management, provided with Quota Groups. Quota Groups are a way of specifying the amount of storage that the databases in a File Group can consume. You can have one File Group in a Quota Group or a number of File Groups in a single Quota Group. No longer will you need to put databases in their own Diskgroup to constrain the amount of storage space they consume. Flex Diskgroups also change the way redundancy is handled: rather than a Diskgroup-wide redundancy setting that is fixed when the Diskgroup is created, redundancy for Flex Diskgroups is specified at the File Group level and is modifiable. Furthermore, for write-once files such as archive logs and backups, parity protection similar to RAID 5 and 6 is available. And lastly, with Flex Diskgroups, the redundancy of a File Group can be used to split off a copy of the File Group as a Shadow Copy. When the Shadow Copy is split off, a new, fully instantiated File Group representing a copy of the database or PDB is created. This is perhaps one of the most requested capabilities for ASM and is available in Oracle 12c Release 2.
6. Domain ID is a serialization mechanism.