03 July 2014

ERROR: failed to establish dependency between database and diskgroup resource

Issue:
ERROR: failed to establish dependency between database and diskgroup resource

was found in the alert log.

Explanation:
This means your database is not registered in clusterware. If you run the command "crsctl stat res -t", you will probably not find it in the list.

Solution:
Register your database in clusterware like this:
srvctl add database -d <db_unique_name> -o <oracle_home> -p <spfile> -a "<diskgroup_list>" -r <role> -s <start_options>
Example:
srvctl add database -d standby -o /u01/app/oracle/product/11.2.0/db_1 -p /u01/app/oracle/product/11.2.0/db_1/dbs/spfilestandby.ora -a "DATA,FRA" -r PHYSICAL_STANDBY -s mount
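
To verify the registration, you can check the clusterware configuration of the new resource (using the same database unique name "standby" as in the example above):
srvctl config database -d standby
crsctl stat res ora.standby.db -t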

13 May 2014

Oracle Heterogeneous Services setup on Linux

On Linux:
1. install the ODBC drivers
yum install unixODBC
yum install mysql-connector-odbc
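
To confirm that unixODBC picked up the MySQL driver, the installed drivers can be listed with odbcinst (shipped with unixODBC):
$ odbcinst -q -d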

2. check that the driver is listed in /etc/odbcinst.ini
# Driver from the mysql-connector-odbc package
# Setup from the unixODBC-libs package
[MySQL]
Description     = ODBC for MySQL
Driver          = /usr/lib/libmyodbc3_r.so
Setup           = /usr/lib/libodbcmyS.so
Driver64        = /usr/lib64/libmyodbc3_r.so
Setup64         = /usr/lib64/libodbcmyS.so
FileUsage       = 1

3. add entry into /etc/odbc.ini
[MySQL]
Description     = MySQL test database
Trace       = Off
TraceFile   = stderr
Driver      = MySQL
SERVER      = 192.168.x.x
USER        = root
PASSWORD    = *******
PORT        = 3306
DATABASE    = test

4. test ODBC connection
$ isql -v MySQL

5. configure Oracle Heterogeneous Services
$ cp /u01/app/oracle/product/11.2.0/db_1/hs/admin/initdg4odbc.ora /u01/app/oracle/product/11.2.0/db_1/hs/admin/initMySQL.ora

6. edit /u01/app/oracle/product/11.2.0/db_1/hs/admin/initMySQL.ora
$ grep -v "^#" /u01/app/oracle/product/11.2.0/db_1/hs/admin/initMySQL.ora

HS_FDS_CONNECT_INFO = MySQL
HS_FDS_TRACE_LEVEL = OFF
HS_FDS_SHAREABLE_NAME = /usr/lib/libodbc.so
HS_LANGUAGE=american_america.we8mswin1252
set ODBCINI=/etc/odbc.ini

7. configure listener
/u01/app/oracle/product/11.2.0/db_1/network/admin/listener.ora
SID_LIST_LISTENER=
...
      (SID_DESC=
         (SID_NAME=MySQL)
         (ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1)
         (PROGRAM=dg4odbc)
      )
...
  )
 ...
restart listener
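
One way to do that without stopping existing services is to reload the configuration and then check that the MySQL SID appears in the services summary:
$ lsnrctl reload
$ lsnrctl status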

8. add entry into /u01/app/oracle/product/11.2.0/db_1/network/admin/tnsnames.ora
...
mysql  =
  (DESCRIPTION=
    (ADDRESS=(PROTOCOL=tcp)(HOST=localhost)(PORT=1521))
    (CONNECT_DATA=(SID=MySQL))
    (HS=OK)
  )
...
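
Before creating the database link, you can verify that the new alias resolves and the listener answers (assuming the listener runs locally on the default port 1521, as configured above):
$ tnsping mysql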

9. connect to the database and create a database link

SQL> create public database link MY_MYSQL connect to "root" identified by "*******" using 'mysql';

10. test select
SQL> select * from table1@my_mysql;
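
If the select fails with ORA-00942, keep in mind that Oracle converts unquoted identifiers to uppercase before they reach the gateway, while MySQL table names on Linux are usually case sensitive, so a lowercase table typically has to be quoted (table1 is just the sample table used above):
SQL> select * from "table1"@my_mysql;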

23 April 2014

Oracle Result Cache

Available only in Enterprise Edition

Documentation quote:
The result cache stores the results of SQL queries and PL/SQL functions in an area called Result Cache Memory in the shared pool. When these queries and functions are executed repeatedly, the results are retrieved directly from the cache memory. This results in a faster response time. The cached results stored become invalid when data in the dependent database objects is modified. The use of the result cache is database-wide decision. Result cache itself is instance specific and can be sized differently on different instances. To disable the result cache in a cluster, you must explicitly set the RESULT_CACHE_MAX_SIZE initialization parameter to 0 during every instance startup.

How to enable:
check the settings:
SQL> show parameter result_cache
NAME                                 TYPE        VALUE
------------------------------------ ----------- -----------
client_result_cache_lag              big integer 3000
client_result_cache_size             big integer 0
result_cache_max_result              integer     10
result_cache_max_size                big integer 2G
result_cache_mode                    string      AUTO
result_cache_remote_expiration       integer     0

The value AUTO is meaningless here (it also shows up as the default in Standard Edition, where the result cache is not available).

RESULT_CACHE_MODE
Property        Description
--------------- ---------------------------------------
Parameter type  String
Syntax          RESULT_CACHE_MODE = { MANUAL | FORCE }
Default value   MANUAL
Modifiable      ALTER SESSION, ALTER SYSTEM
Basic           No

SQL> alter system set result_cache_mode=FORCE scope=memory;

SQL> alter system set result_cache_max_size=2G scope=memory;
--> Note: Result Cache is a part of SGA.
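
With result_cache_mode=MANUAL (the default) only statements that carry the RESULT_CACHE hint use the cache; with FORCE, as set above, every eligible result is cached and the NO_RESULT_CACHE hint opts a statement out. A minimal example of the hint (EMP here is just the familiar sample table, not part of this setup):
SQL> select /*+ RESULT_CACHE */ deptno, count(*) from emp group by deptno;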

How to monitor:
SQL> select DBMS_RESULT_CACHE.status from dual;
STATUS
----------------------------
ENABLED

Generate report:
SQL> set serveroutput on
SQL> execute DBMS_RESULT_CACHE.MEMORY_REPORT(TRUE);
R e s u l t   C a c h e   M e m o r y   R e p o r t
[Parameters]
Block Size          = 0 bytes
Maximum Cache Size  = 0 bytes (0 blocks)
Maximum Result Size = 0 bytes (0 blocks)
[Memory]
Total Memory = 5184 bytes [0.005% of the Shared Pool]
... Fixed Memory = 5184 bytes [0.005% of the Shared Pool]
....... Memory Mgr = 132 bytes
....... Cache Mgr  = 152 bytes
....... Bloom Fltr = 2K bytes
....... State Objs = 2852 bytes
... Dynamic Memory = 0 bytes [0.000% of the Shared Pool]

PL/SQL procedure successfully completed.
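
Besides the memory report, cache usage can also be checked through the V$RESULT_CACHE_STATISTICS and V$RESULT_CACHE_OBJECTS views, for example:
SQL> select name, value from v$result_cache_statistics;
SQL> select type, status, name from v$result_cache_objects where rownum <= 10;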




28 March 2014

RMAN-04014: startup failed: ORA-00845: MEMORY_TARGET not supported on this system

Issue description
While creating a standby database on another host with the RMAN duplicate command, I got an error like:
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 03/27/2014 15:41:59
RMAN-05501: aborting duplication of target database
RMAN-03015: error occurred in stored script Memory Script
RMAN-04014: startup failed: ORA-00845: MEMORY_TARGET not supported on this system

I was really surprised, because the standby was configured with less memory than the actual primary:

32 GB - SGA* on Production
4 GB  - SGA* on Standby
* - as set by the MEMORY_TARGET parameter

Reason for the Issue
The problem was in the system configuration: /dev/shm (tmpfs) was only 3 GB, smaller than the 4 GB MEMORY_TARGET of the standby instance, and Oracle requires /dev/shm to be at least as large as MEMORY_TARGET.

[root@sby01 ~]# df -h
Filesystem                         Size  Used Avail Use% Mounted on
/dev/mapper/vg_sby01-lv_root   53G   18G   34G  35% /
tmpfs                              3.0G  30M  2.9G   1% /dev/shm
/dev/vda1                          485M  100M  360M  22% /boot
10.1.1.5:/u02                     734G  494G  240G  68% /u02
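
You can compare the tmpfs size above with the instance setting (4 GB on this standby):
SQL> show parameter memory_target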

Workaround
[root@sby01 ~]# vi /etc/fstab

#
# /etc/fstab
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/vg_sby01-lv_root /                       ext4    defaults        1 1
UUID=ae17aee-972b-4f91-806b-f3e33d29a0 /boot                   ext4    defaults        1 2
/dev/mapper/vg_mtksby01p2-lv_swap swap                    swap    defaults        0 0
tmpfs                   /dev/shm                tmpfs   defaults,size=8g        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0

I allocated 8 GB for tmpfs in /etc/fstab and remounted it:

[root@sby01 ~]# mount -o remount /dev/shm
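
After the remount, df should report the new size, and the RMAN duplicate can be restarted:
[root@sby01 ~]# df -h /dev/shm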