Wednesday, 22 February 2017

Policy-Managed Oracle RAC One Node Databases

Oracle RAC One Node, introduced in Oracle Database 11g Release 2 (11.2.0.2), is a single instance of an Oracle RAC-enabled database running on one node in a cluster. It lets you consolidate multiple databases onto a single cluster while providing the high-availability benefit of failover protection to single-instance databases.

Oracle RAC One Node databases can be configured to be administrator-managed or policy-managed.

Administrator-managed Oracle RAC One Node Database: The database instance executes on one of the servers from a specified list of candidate nodes. Candidate servers reside in the Generic server pool, and as long as at least one server from the candidate list is available, database availability is ensured.
Policy-managed Oracle RAC One Node Database: The database instance executes on one of the members of the server pool(s) associated with the database. To ensure availability of the database, the server pools must be configured such that a server is available for the database to fail over to if its current node becomes unavailable (see the sketch below).
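
For reference, a policy-managed RAC One Node database can be registered against a server pool directly with srvctl. The following is a minimal sketch (the Oracle home path mirrors the environment used later in this article, and the database itself is assumed to already exist):

[oracle@host01 ~]$ srvctl add database -db r1pmdb -oraclehome /u01/app/oracle/product/12.1.0/dbhome_1 -dbtype RACONENODE -serverpool r1pool

DBCA performs an equivalent registration when you create a policy-managed RAC One Node database, which is the approach used in the demonstrations below.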

In this article, I will discuss the various options available for configuring server pools for policy-managed RAC One Node databases.

Environment:
  • Oracle Clusterware version: 12.1.0.2
  • Oracle database version: 12.1.0.2
  • Type of cluster: Flex
  • Hub nodes: host01, host02, host03
  • Policy-managed RAC One Node database: r1pmdb

Server Pool Configuration for RAC One Node Databases

Server pools for RAC One Node databases can be configured as one of the following:
  • Server pool of size 1
  • Server pool of size > 1
  • Server pool associated with a category (Oracle Clusterware 12c onwards)
Let’s discuss these possibilities one by one.

Server pool of size 1

A server pool of size 1 can be used for a RAC One Node database by setting both the minimum and maximum size of the pool to 1, so that the pool always contains exactly one server. To ensure availability of the database, the IMPORTANCE of this server pool should be set higher than that of all other server pools in the cluster, so that if the only server in the pool fails, a server from the Free pool or from a less important server pool is relocated into it, as required.
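
In the demonstration below, the pool is deliberately created with the default IMPORTANCE of 0 to show what goes wrong; in practice, the size and importance can be set in a single command (the importance value 10 is illustrative):

[oracle@host02 ~]$ srvctl add srvpool -serverpool r1pool -min 1 -max 1 -importance 10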

Demonstration
  • Create a server pool r1pool (to host the RAC One Node database) of size 1:
[oracle@host02 root]$ srvctl add srvpool -serverpool r1pool -min 1 -max 1

[oracle@host02 root]$ srvctl config srvpool -serverpool r1pool
Server pool name: r1pool
Importance: 0, Min: 1, Max: 1
Category: hub
Candidate server names:
  • Create another server pool, pool1, of size 1:
[oracle@host02 root]$ srvctl add srvpool -serverpool pool1 -min 1 -max 1

[oracle@host02 root]$ srvctl config srvpool -serverpool pool1
Server pool name: pool1
Importance: 0, Min: 1, Max: 1
Category: hub
Candidate server names:
  • Currently, one server has been assigned to each of the Free, pool1 and r1pool server pools.
[root@host02 ~]# crsctl stat serverpool
NAME=Free
ACTIVE_SERVERS=host01

NAME=Generic
ACTIVE_SERVERS=

NAME=ora.pool1
ACTIVE_SERVERS=host03

NAME=ora.r1pool
ACTIVE_SERVERS=host02
  • Using DBCA, create a policy-managed RAC One Node database r1pmdb assigned to server pool r1pool:
[oracle@host02 root]$ srvctl config database -d r1pmdb

Database unique name: r1pmdb
Database name: r1pmdb
Oracle home: /u01/app/oracle/product/12.1.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/R1PMDB/PARAMETERFILE/spfile.282.929472875
Password file: +DATA/R1PMDB/PASSWORD/pwdr1pmdb.286.929472177
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: r1pool
Disk Groups: DATA
Mount point paths:
Services: sr1pmdb
Type: RACOneNode
Online relocation timeout: 30
Instance name prefix: r1pmdb
Candidate servers:
OSDBA group: dba
OSOPER group: oper
Database instances:
Database is policy managed
  • We can see that the database r1pmdb is currently executing on host02, the only member of server pool r1pool:
[root@host02 ~]# srvctl status database -d r1pmdb

Instance r1pmdb_1 is running on node host02
Online relocation: INACTIVE
  • When we stop CRS on host02, where r1pmdb is currently running, host02 ceases to be a part of the cluster and host01, until now a free server, moves into r1pool. Database r1pmdb now runs on host01, the newly assigned member of server pool r1pool.
[root@host02 ~]# crsctl stop crs -f

[root@host01 ~]# crsctl stat serverpool
NAME=Free
ACTIVE_SERVERS=

NAME=Generic
ACTIVE_SERVERS=

NAME=ora.pool1
ACTIVE_SERVERS=host03

NAME=ora.r1pool
ACTIVE_SERVERS=host01

[root@host01 ~]# srvctl status database -d r1pmdb

Instance r1pmdb_1 is running on node host01
Online relocation: INACTIVE
  • Let’s stop CRS on host01, where database r1pmdb is currently running. Note that host03, the only surviving server in the cluster and a member of server pool pool1, does not move to server pool r1pool, because the IMPORTANCE of r1pool is not higher than that of pool1 (both are 0). As a result, r1pool becomes empty and database r1pmdb is no longer running on any node.
[root@host01 ~]# crsctl stop crs -f

[root@host03 ~]# crsctl stat serverpool
NAME=Free
ACTIVE_SERVERS=

NAME=Generic
ACTIVE_SERVERS=

NAME=ora.pool1
ACTIVE_SERVERS=host03

NAME=ora.r1pool
ACTIVE_SERVERS=

[root@host03 ~]# srvctl config srvpool -serverpool r1pool
Server pool name: r1pool
Importance: 0, Min: 1, Max: 1
Category: hub
Candidate server names:

[oracle@host03 ~]$ srvctl config srvpool -serverpool pool1

Server pool name: pool1
Importance: 0, Min: 1, Max: 1
Category: hub
Candidate server names:

[root@host03 ~]# srvctl status database -d r1pmdb
Database is not running.
Online relocation: INACTIVE
  • Let’s increase the IMPORTANCE of r1pool to 10. Because r1pool now has higher importance than pool1, host03 moves from pool1 to r1pool, and database r1pmdb starts running on host03:
[oracle@host03 ~]$ srvctl modify srvpool -serverpool r1pool -importance 10

[oracle@host03 ~]$ srvctl config srvpool -serverpool r1pool
Server pool name: r1pool
Importance: 10, Min: 1, Max: 1
Category: hub
Candidate server names:

[root@host03 ~]# crsctl stat serverpool
NAME=Free
ACTIVE_SERVERS=

NAME=Generic
ACTIVE_SERVERS=

NAME=ora.pool1
ACTIVE_SERVERS=

NAME=ora.r1pool
ACTIVE_SERVERS=host03

[root@host03 ~]# srvctl status database -d r1pmdb

Instance r1pmdb_1 is running on node host03
Online relocation: INACTIVE

Thus, if a server pool of size 1 is used to host a RAC One Node database, the IMPORTANCE of the server pool should be set higher than that of all other server pools in the cluster to ensure availability of the database.
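
Since Oracle Clusterware 12c, the what-if command evaluation feature can predict this kind of rebalancing before any server actually fails. As a sketch, the following evaluates, without making any change, what would happen to the server pools if host03 left the cluster (output omitted here):

[root@host03 ~]# crsctl eval delete server host03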

Server pool of size > 1

We can also configure a server pool of size > 1 for a RAC One Node database. In this case, the database executes on any one node in the server pool. If that node leaves the cluster, the database automatically fails over to another node in the server pool. When the pool falls below its minimum size, servers from the Free pool or from other server pools in the cluster may move into the deficient pool, depending upon the configuration. If it is desired that the RAC One Node database run only on certain nodes in the cluster, the associated server pool can be configured accordingly. As long as there is at least one server in the server pool, the RAC One Node database will be available.
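
A pool of size > 1 also eases planned maintenance: besides failing over on node loss, a RAC One Node instance can be relocated online between members of its pool. A sketch, using the 11.2-style options that are still accepted in 12.1 (-n names the target node, -w the online relocation timeout in minutes):

[oracle@host02 ~]$ srvctl relocate database -d r1pmdb -n host03 -w 30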

Demonstration
  • Let’s increase the size of server pool r1pool to 2 and specify host02 and host03 as its candidate servers, so that database r1pmdb will run only on host02 or host03:
[oracle@host03 ~]$ srvctl modify srvpool -serverpool r1pool -min 1 -max 2 -category ""

[oracle@host03 ~]$ srvctl modify srvpool -serverpool r1pool -servers "host02,host03"

[oracle@host03 ~]$ srvctl config srvpool -serverpool r1pool
Server pool name: r1pool
Importance: 10, Min: 1, Max: 2
Category:
Candidate server names: host02,host03
  • On starting CRS on host01 and host02, host02 joins host03 as a member of r1pool, whereas host01, not being a candidate server for r1pool, is assigned to pool1.
[root@host01 ~]# crsctl start crs
[root@host02 ~]# crsctl start crs
[root@host01 ~]# crsctl stat serverpool
NAME=Free
ACTIVE_SERVERS=

NAME=Generic
ACTIVE_SERVERS=

NAME=ora.pool1
ACTIVE_SERVERS=host01

NAME=ora.r1pool
ACTIVE_SERVERS=host02 host03
  • If we now stop CRS on host03, where the database is currently executing, host02 is the only server left in r1pool, and the database automatically relocates to it.
[root@host01 ~]# srvctl status database -d r1pmdb
Instance r1pmdb_1 is running on node host03
Online relocation: INACTIVE

[root@host03 ~]# crsctl stop crs -f

[root@host01 ~]# crsctl stat serverpool
NAME=Free
ACTIVE_SERVERS=

NAME=Generic
ACTIVE_SERVERS=

NAME=ora.pool1
ACTIVE_SERVERS=host01

NAME=ora.r1pool
ACTIVE_SERVERS=host02

[root@host01 ~]# srvctl status database -d r1pmdb
Instance r1pmdb_1 is running on node host02
Online relocation: INACTIVE
  • If we now stop CRS on host02 as well, r1pool becomes empty: its only candidate servers, host02 and host03, are both out of the cluster. As a result, database r1pmdb is no longer running anywhere in the cluster.
[root@host02 ~]# crsctl stop crs -f

[root@host01 ~]# crsctl stat serverpool
NAME=Free
ACTIVE_SERVERS=

NAME=Generic
ACTIVE_SERVERS=

NAME=ora.pool1
ACTIVE_SERVERS=host01

NAME=ora.r1pool
ACTIVE_SERVERS=

[root@host01 ~]# srvctl status database -d r1pmdb

Database is not running.
Online relocation: INACTIVE

Thus, to restrict the execution of a RAC One Node database to specific servers in the cluster, we can name the servers that may be part of the associated server pool. However, it is often more appropriate to assign servers based on their attributes, such as memory or CPU count, rather than their names.

Server pool associated with a category (Oracle Clusterware 12c onwards)

Oracle Grid Infrastructure 12c enhances the use of server pools by exposing server attributes such as MEMORY_SIZE and CPU_COUNT for each server. A new Clusterware object, the server category, lets you group servers according to these attributes. A server pool can then be associated with a category, so that pool membership is determined by server attributes rather than solely by the number or names of the servers in the pool. As before, the database executes on any one node in the server pool and automatically fails over to another pool member if that node leaves the cluster. When the pool falls below its minimum size, servers of the same category from the Free pool or from other server pools may move into the deficient pool, depending upon the configuration. As long as there is at least one server in the server pool, the RAC One Node database will be available.
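
The general pattern, sketched below with an illustrative category name, pool name, and memory threshold, is to define a category with an attribute expression, verify which servers match it, and bind a server pool to it:

[root@host01 ~]# crsctl add category big -attr "EXPRESSION='(MEMORY_SIZE > 4000)'"
[root@host01 ~]# crsctl stat category big
[oracle@host01 ~]$ srvctl modify srvpool -serverpool mypool -category big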

Demonstration
  • Let’s look at the attributes of the various servers in the cluster. It can be seen that host01 and host03 have MEMORY_SIZE=3063 (MB), whereas host02 has MEMORY_SIZE=2502.
[root@host01 ~]# crsctl stat server host01 host02 host03 -f
NAME=host01
MEMORY_SIZE=3063
CPU_COUNT=1
CPU_CLOCK_RATE=3292
CPU_HYPERTHREADING=0
CPU_EQUIVALENCY=1000
DEPLOYMENT=other
CONFIGURED_CSS_ROLE=hub
RESOURCE_USE_ENABLED=1
SERVER_LABEL=
PHYSICAL_HOSTNAME=
STATE=ONLINE
ACTIVE_POOLS=ora.pool1
STATE_DETAILS=
ACTIVE_CSS_ROLE=hub

NAME=host02
MEMORY_SIZE=2502
CPU_COUNT=1
CPU_CLOCK_RATE=3292
CPU_HYPERTHREADING=0
CPU_EQUIVALENCY=1000
DEPLOYMENT=other
CONFIGURED_CSS_ROLE=hub
RESOURCE_USE_ENABLED=1
SERVER_LABEL=
PHYSICAL_HOSTNAME=
STATE=ONLINE
ACTIVE_POOLS=ora.r1pool
STATE_DETAILS=AUTOSTARTING RESOURCES
ACTIVE_CSS_ROLE=hub

NAME=host03
MEMORY_SIZE=3063
CPU_COUNT=1
CPU_CLOCK_RATE=3292
CPU_HYPERTHREADING=0
CPU_EQUIVALENCY=1000
DEPLOYMENT=other
CONFIGURED_CSS_ROLE=hub
RESOURCE_USE_ENABLED=1
SERVER_LABEL=
PHYSICAL_HOSTNAME=
STATE=ONLINE
ACTIVE_POOLS=ora.r1pool
STATE_DETAILS=AUTOSTARTING RESOURCES
ACTIVE_CSS_ROLE=hub
  • We will create two server categories:
    • large (MEMORY_SIZE >= 3000 MB)
    • small (MEMORY_SIZE < 3000 MB)
We can see that host01 and host03 belong to the large category, whereas host02 belongs to the small category.

[root@host01 ~]# crsctl add category large -attr "EXPRESSION='(MEMORY_SIZE > 3000) OR (MEMORY_SIZE = 3000)'"

[root@host01 ~]# crsctl add category small -attr "EXPRESSION='(MEMORY_SIZE < 3000)'"

[root@host01 ~]# crsctl stat server -category large
NAME=host01
STATE=ONLINE

NAME=host03
STATE=ONLINE

[root@host01 ~]# crsctl stat server -category small
NAME=host02
STATE=ONLINE
  • Let’s resize server pool r1pool to 1, clear its candidate server list, and associate the large category with both r1pool and pool1:
[oracle@host03 root]$ srvctl modify srvpool -serverpool r1pool -servers ""
[oracle@host03 root]$ srvctl modify srvpool -serverpool r1pool -category large -min 1 -max 1

[oracle@host03 root]$ srvctl modify srvpool -serverpool pool1 -category large
  • As a result, the two servers belonging to the large category, host01 and host03, are assigned to pool1 and r1pool respectively, whereas the small server host02 goes to the Free pool. Subsequently, database r1pmdb starts running on host03, the member of r1pool.
[root@host01 ~]# crsctl stat serverpool
NAME=Free
ACTIVE_SERVERS=host02

NAME=Generic
ACTIVE_SERVERS=

NAME=ora.pool1
ACTIVE_SERVERS=host01

NAME=ora.r1pool
ACTIVE_SERVERS=host03

[root@host01 ~]# srvctl status database -d r1pmdb
Instance r1pmdb_2 is running on node host03
Online relocation: INACTIVE

  • If we stop CRS on host03, where the database is currently running, server host01 moves from pool1 to r1pool, because both servers belong to the large category and r1pool has higher IMPORTANCE than pool1. This causes database r1pmdb to relocate to host01.

[root@host03 ~]# crsctl stop crs -f

[root@host01 ~]# crsctl stat serverpool
NAME=Free
ACTIVE_SERVERS=host02

NAME=Generic
ACTIVE_SERVERS=

NAME=ora.pool1
ACTIVE_SERVERS=

NAME=ora.r1pool
ACTIVE_SERVERS=host01

[root@host01 ~]# srvctl status database -d r1pmdb
Instance r1pmdb_2 is running on node host01
Online relocation: INACTIVE
  • Now, if we stop CRS on host01, where the database is currently running, host02, although free, does not move to r1pool, because it belongs to the small category. As a result, server pool r1pool becomes empty and database r1pmdb stops running.
[root@host01 ~]# crsctl stop crs -f

[root@host02 ~]# crsctl stat serverpool
NAME=Free
ACTIVE_SERVERS=host02

NAME=Generic
ACTIVE_SERVERS=

NAME=ora.pool1
ACTIVE_SERVERS=

NAME=ora.r1pool
ACTIVE_SERVERS=

[root@host02 ~]# srvctl status database -d r1pmdb
Database is not running.
Online relocation: INACTIVE

This demonstrates how a category can be associated with the server pool hosting a RAC One Node database. If a server pool associated with a category loses a server, a server from the Free pool or from another pool will move into it only if that server belongs to the same category; in other words, the RAC One Node database will run only on servers with the required attributes.
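
Finally, whichever of the three configurations you choose, the 12c prediction facilities let you confirm the expected failover behavior without an actual outage. For example, with the cluster back in a steady state, srvctl can evaluate what would happen if the database failed (sketch; output omitted):

[oracle@host02 ~]$ srvctl predict database -db r1pmdb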