Monday, 4 January 2016

About the Synergy tool

This time a short article, somewhat off topic, as it is about a workstation tool and not really related to databases.

Overview

Synergy is a tool that allows binding all the workstations one uses to a single, common set of input peripherals (i.e. mouse and keyboard). It also transfers the clipboard content between workstations, which greatly enhances productivity. It is provided for Windows, Linux and Mac OS X, so I would say it covers 99.99% of the workstation OS population.

My architecture

My setup consists of 2 laptops - the first one runs Linux (a current Ubuntu) and the second one runs Windows 7. On the Linux side the package is called synergy and is in version 1.7.0; on the Windows side I have Synergy in version 1.3.4.

My configuration

The Synergy instance on the Linux side is configured as a server, so the keyboard and mouse of the Linux workstation become the common input set. On Windows the software works as a client. As I use the community version, the communication between workstations would not be secure if I simply relied on Synergy itself. One may buy the PRO version to have the transfer secured by SSH, and I suppose this is a very convenient solution. Here I provide a "poor man's" solution, where the Synergy communication is sent through an SSH tunnel.

Client and server insecure configuration

For the client I check the first radio button (Use another computer's keyboard and mouse) and specify the IP of the server.
For the server I check the Server checkbox and configure it interactively - in the interactive configuration window I set the arrangement of the workstations. I believe the available screens are discovered by checking all the available clients, so I would start with the insecure connection. The connection is made on port 24800.
As you may see, this configuration is very easy.
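I configured my server through the GUI, but for reference a text configuration for synergys could look more or less like the sketch below (the screen names linuxbox and winbox are just examples; such a file is usually kept as ~/.synergy.conf or passed with the --config option):
section: screens
    linuxbox:
    winbox:
end
section: links
    linuxbox:
        right = winbox
    winbox:
        left = linuxbox
end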

Securing the connection

Now it is time to secure it. I did it by opening an SSH tunnel from the client to the server, so actually everything is done on the Windows side (which has some pros and cons). The only thing needed on the server is an SSH key generation (ssh-keygen -t rsa -b 2048).
The first thing on Windows is to download the PuTTY SSH client. Then I generated an SSH key (PUTTYGEN.EXE, SSH-2 RSA algorithm, 2048 bits key length) and saved both keys (private and public) to files. Next I put the public key into the server's authorized_keys file ($HOME/.ssh/authorized_keys - if it does not exist, just create it with your favourite editor) in order to connect by key. To move the key one may use the currently working insecure Synergy installation :-).
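The server-side part boils down to something like the sketch below (the key file name is just an example; note that authorized_keys expects the single-line OpenSSH format shown in the PuTTYgen window, not the file produced by the Save public key button):
## on the Linux server, as the user we will connect as
mkdir -p $HOME/.ssh && chmod 700 $HOME/.ssh
## win2priv_openssh.pub is assumed to hold the one-line OpenSSH form of the public key
cat win2priv_openssh.pub >> $HOME/.ssh/authorized_keys
chmod 600 $HOME/.ssh/authorized_keys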
An additional step is to build a batch file running pageant.exe (which is an SSH agent keeping the passphrase in memory, so the key can be opened):
REM content of the batch file
start "" c:\app\putty\pageant.exe "C:\Users\{user}\Documents\win2priv.ppk"
This batch file has to be put into the Startup group to run just after the Windows initialization.
Further I needed something to establish the tunnel to the server. I tried to do it with another batch file calling plink.exe (another executable from the PuTTY suite), but it did not work (maybe I did something wrong), so I configured a session in PuTTY instead - one has to start PuTTY, create a session to the server and then in Connection->SSH->Tunnels add an entry (Source port: 24800, Destination: 127.0.0.1:24800, options Local and Auto) with the Add button.
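For reference, the plink call I was experimenting with looked more or less like the sketch below (the paths, user name and server address are examples):
REM -N: no remote shell, -L: forward local port 24800 to the synergy port on the server
c:\app\putty\plink.exe -N -L 24800:127.0.0.1:24800 -i "C:\Users\{user}\Documents\win2priv.ppk" user@<server-ip>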

Now we are ready to open the connection to the server (beforehand, however, we need to stop the current insecure Synergy connection, as it is established on the same port). After clicking the Open button one has to set the server IP to 127.0.0.1 in the Synergy client and run it with the Start button.

Important notes

For some reason unclear to me the PuTTY approach failed and I had to exchange the PuTTY binaries for the KiTTY binaries (which is pretty much the same thing). With KiTTY the whole setup started to work smoothly.
Pros
The connection is secure :-)
Cons for Linux being server
  • during the Windows initialization the SSH agent will ask for the passphrase to the SSH key
  • one has to open the tunnel manually - you may try with plink.exe (klink.exe from KiTTY); this could probably be replaced with some PowerShell script, but I cannot use PowerShell scripts here (see the sketch after these lists)
Cons for Windows being server
  • on the Linux side everything may be nicely automated - no problem with opening a tunnel in the background
  • one has to install and run an SSH server on the Windows side
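A possible automated variant of the Windows startup batch, combining the agent and the tunnel, is sketched below (paths, key file, user name and server address are examples; klink.exe is KiTTY's counterpart of plink.exe and is assumed here to accept the same options):
REM start the agent holding the key, then open the tunnel in the background
REM pageant already holds the key, so klink can authenticate through the agent
start "" c:\app\kitty\pageant.exe "C:\Users\{user}\Documents\win2priv.ppk"
start "" c:\app\kitty\klink.exe -N -L 24800:127.0.0.1:24800 user@<server-ip>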

Friday, 11 December 2015

A note about Data pump behaviour while blocked

Lately I have been managing several db environments on which I perform frequent loads with the Data Pump tools. As those loads are sometimes quite large (up to 0.5 TB), I constantly run into the situation where the space designated for archived logs fills up completely, which in turn causes the archival process to stop.
ARCH: Archival stopped, error occurred. Will continue retrying
ORACLE Instance YYYYYY - Archival Error
ORA-16038: log 1 sequence# 3208 cannot be archived
ORA-19504: failed to create file ""
ORA-00312: online log 1 thread 1: '+RECO/YYYYYY/onlinelog/group_1.257.895415461'
ORA-00312: online log 1 thread 1: '+DATA/YYYYYY/onlinelog/group_1.257.895415461'
Fri Dec 11 11:03:35 2015
Archiver process freed from errors. No longer stopped
ARCH: Archival stopped, error occurred. Will continue retrying
ORACLE Instance YYYYYY - Archival Error
ORA-16014: log 1 sequence# 3208 not archived, no available destinations
I used to load data into databases in NOARCHIVELOG mode, where there is no such issue at all. In many cases the archival break simply means that a Data Pump job pauses until the space for archived logs is freed and available again. Thus it does not matter when the space gets freed (by a backup).
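Freeing the space typically means backing up and deleting the archived logs, more or less like this (a sketch):
## back up the archived logs and remove them from the archival destination
rman target /
RMAN> backup archivelog all delete input;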
But there are scenarios in which I have faced the following behaviour: when a Data Pump job meets the archival error, it fails with the following:
ORA-39065: unexpected master process exception in RECEIVE
ORA-39078: unable to dequeue message for agent MCP from 
  queue "KUPC$C_1_20151210094644"
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPC$QUE_INT", line 610
ORA-04021: timeout occurred while waiting to lock object
Job "ZZZZZZ"."SYS_IMPORT_SCHEMA_01" stopped due to fatal error 
  at Thu Dec 10 11:54:39 2015 elapsed 0 02:07:55
It seems to me that such situations are possible when parallel job slaves are run. They die waiting for some object lock, and possibly it is simply a communication problem between the slaves and the master, when the db freeze disables this communication. However, this is not a final error. Facing it, one may attach to the Data Pump session. The job seems to freeze and does not recover automatically when work becomes possible again. In such a situation the user has to issue the CONTINUE_CLIENT command - then the job recreates the slaves and steps further.
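Attaching and resuming looks more or less like this (a sketch; the schema and job name are taken from the log below):
## attach to the stopped job (run as the job owner)
impdp ZZZZZZ/******** attach=SYS_IMPORT_SCHEMA_01
## and at the interactive prompt:
Import> CONTINUE_CLIENT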
Job SYS_IMPORT_SCHEMA_01 has been reopened at Thu Dec 10 12:23:51 2015
Restarting "ZZZZZZ"."SYS_IMPORT_SCHEMA_01":  ZZZZZZ/********
  parfile=ZZZZZZ.par

Monday, 19 October 2015

Silent installation of client

There are actually 2 components of such an installation: a response file, which contains all the arguments for the installation, and the shell call.

Response file

Below is an example of a response file:
oracle.install.responseFileVersion=/oracle/install/rspfmt_clientinstall_response_schema_v11_2_0

SELECTED_LANGUAGES=en
UNIX_GROUP_NAME=oinstall
INVENTORY_LOCATION=/u01/app/oracle/oraInventory
ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/client_1
ORACLE_BASE=/u01/app/oracle

oracle.install.client.installType=Custom
oracle.install.client.upgrading=false
## I cut the line below for better display
oracle.install.client.customComponents="oracle.sqlj:11.2.0.4.0","oracle.rdbms.util:11.2.0.4.0","oracle.javavm.client:11.2.0.4.0","oracle.sqlplus:11.2.0.4.0","oracle.dbjava.jdbc:11.2.0.4.0","oracle.ldap.client:11.2.0.4.0","oracle.rdbms.oci:11.2.0.4.0","oracle.precomp:11.2.0.4.0","oracle.xdk:11.2.0.4.0","oracle.network.aso:11.2.0.4.0","oracle.assistants.oemlt:11.2.0.4.0","oracle.oraolap.mgmt:11.2.0.4.0","oracle.network.client:11.2.0.4.0","oracle.network.cman:11.2.0.4.0","oracle.network.listener:11.2.0.4.0","oracle.ordim.client:11.2.0.4.0","oracle.odbc:11.2.0.4.0","oracle.has.client:11.2.0.4.0","oracle.dbdev:11.2.0.4.0","oracle.rdbms.scheduler:11.2.0.4.0"

oracle.install.client.schedulerAgentHostName=10.0.54.2
oracle.install.client.schedulerAgentPortNumber=1025
oracle.installer.autoupdates.option=

oracle.installer.autoupdates.downloadUpdatesLoc=

AUTOUPDATES_MYORACLESUPPORT_USERNAME=
AUTOUPDATES_MYORACLESUPPORT_PASSWORD=

PROXY_HOST=
PROXY_PORT=
PROXY_USER=
PROXY_PWD=
PROXY_REALM=
The example above is an excerpt from the sample file provided by Oracle, stripped of comments and a number of other options. The most important entries here are the Oracle base and home locations and the inventory location.
The inventory should be placed in an independent directory, where one may change permissions without affecting the other binary trees. This is especially important if one plans to install more products, as all of them require access to such an inventory and they may be owned by different users.
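On Linux the chosen inventory location is recorded in /etc/oraInst.loc (created when orainstRoot.sh is run as root for the first inventory); for the values above it would contain something like:
## /etc/oraInst.loc - a sketch for the locations used in the response file above
inventory_loc=/u01/app/oracle/oraInventory
inst_group=oinstall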

Shell call

There are several options to the runInstaller executable. The simplest call looks like this:
./runInstaller -silent -responseFile 
Sometimes -ignorePrereq is useful, if one knows what one is doing.
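A complete call could look like the sketch below (the response file path is just an example):
## assuming the response file above was saved as /u01/stage/client.rsp
./runInstaller -silent -responseFile /u01/stage/client.rsp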

Pros and cons

The response file preparation may be a bit of trouble, but it is worth the effort - after all, it is good to apply some convention, and such a file may become a kind of policy document with regard to client installations.
Most importantly, the installation is way faster than with the GUI.

Tuesday, 22 September 2015

Trouble with an installation on AIX

This time I got quite a simple task - to install Oracle database binaries on a host running AIX. Work as usual.
But after copying the files the installer displayed an error. Below is an excerpt from the log:
ld: 0711-866 INTERNAL ERROR: Output symbol table size miscalculated.

INFO: ld: 0711-759 INTERNAL ERROR: Source file xoutxx.c, line 574.
        Depending on where this product was acquired, contact your service
        representative or the approved supplier.

INFO: make: 1254-004 The error code from the last command is 16.


Stop.

INFO: make: 1254-004 The error code from the last command is 2.


Stop.

INFO: End output from spawned process.
INFO: ----------------------------------
INFO: Exception thrown from action: make
Exception Name: MakefileException
Exception String: Error in invoking target 'links proc gen_pcscfg' of makefile 
      '/oracle/app/product/11.2.0.2/precomp/lib/ins_precomp.mk'. See 
      '/oracle/oraInventory/logs/installActions2015-09-22_08-11-37AM.log' for details.
Exception Severity: 1


As this is version 11.2.0.2 and the AIX is 7.1, I was afraid this might be caused by some missing system filesets or the like. It seems it is not so - all the details may be found in "AIX: ERROR IN INVOKING TARGET 'LINKS PROC GEN_PCSCFG' OF MAKEFILE, ld: 0711-593 SEVERE ERROR: Symbol C_BSTAT (Doc ID 985563.1)" on Metalink.

Wednesday, 9 September 2015

ORA-32598

Today, while creating a test environment, we hit ORA-32598: user names cannot be changed in ALTER DATABASE LINK command. One element of creating a test environment is changing the users' passwords and, along with that, the passwords stored in db links.
As this is the 11g version, we are able to leverage the command ALTER DATABASE LINK.
We ran something like:
ALTER DATABASE LINK link1 CONNECT TO user1 identified by "user1test";
And the db engine complains about changing the user name, but the name seems perfectly fine. We know the definition was copied from a production environment where this link works properly. But wait...
What about the eternal problem with double quotes and names?
This is the diagnosis, so the solution is clear - the db link creator used lower case letters in his/her command and enclosed the name in double quotes. On production it works even though the user on the remote db is named traditionally (i.e. its name is stored in upper case), as apparently the connect string on the remote side is resolved in the usual, case-insensitive way.
But on the local database the user name in the link definition is stored in lower case, so if one does not use double quotes, the engine does not find a match between the user already set and the name from the command. Thus the proper command should look like the following:
ALTER DATABASE LINK link1 CONNECT TO "user1" identified by "user1test";
And now all works fine...
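By the way, the name actually stored in the link definition may be checked beforehand with a query like this (a sketch; run as the link owner, the link name is the one from the example):
-- check how the user name is stored in the link definition
SELECT db_link, username FROM user_db_links WHERE db_link LIKE 'LINK1%';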

After OS upgrade ...

The only really important thing to do after an OS upgrade on a host with an Oracle db is to relink the binaries.
## before relinking it is good to list all the files owned by root,
## as after the relink operation they will be owned by oracle again
[oracle@host1 ~]$ find $ORACLE_HOME -user root -exec ls -l {} \;
-rw-r-----   1 root     oinstall    1534 Dec 21  2005 /oracle/product/11.2.0.3/rdbms/admin/externaljob.ora
-rwsr-x---   1 root     oinstall 1340408 Sep 25  2011 /oracle/product/11.2.0.3/bin/oradism
-rws--x---   1 root     oinstall   29216 Jun 21  2013 /oracle/product/11.2.0.3/bin/nmo
-rws--x---   1 root     oinstall   21624 Jun 21  2013 /oracle/product/11.2.0.3/bin/nmb
-rwsr-x---   1 root     oinstall 1277904 Jun 21  2013 /oracle/product/11.2.0.3/bin/extjob
-rws--x---   1 root     oinstall   93296 Jun 21  2013 /oracle/product/11.2.0.3/bin/nmhs
-rwsr-x---   1 root     oinstall   26824 Jun 21  2013 /oracle/product/11.2.0.3/bin/jssu


[oracle@host1 ~]$ cd $ORACLE_HOME
[oracle@host1 ~]$ ./bin/relink all

## after the relink operation the binaries which have to be owned by root are
## again owned by oracle; after an installation the permissions are set by root.sh,
## but now it is enough to run the commands below (root.sh does more things);
## note they have to be run as root, with ORACLE_HOME pointing to the same home

[root@host1 ~]# chown root $ORACLE_HOME/rdbms/admin/externaljob.ora
[root@host1 ~]# chmod 640 $ORACLE_HOME/rdbms/admin/externaljob.ora
[root@host1 ~]# chown root $ORACLE_HOME/bin/oradism
[root@host1 ~]# chmod 4750 $ORACLE_HOME/bin/oradism
[root@host1 ~]# chown root $ORACLE_HOME/bin/nmo
[root@host1 ~]# chmod 4710 $ORACLE_HOME/bin/nmo
[root@host1 ~]# chown root $ORACLE_HOME/bin/nmb
[root@host1 ~]# chmod 4710 $ORACLE_HOME/bin/nmb
[root@host1 ~]# chown root $ORACLE_HOME/bin/extjob
[root@host1 ~]# chmod 4750 $ORACLE_HOME/bin/extjob
[root@host1 ~]# chown root $ORACLE_HOME/bin/nmhs
[root@host1 ~]# chmod 4710 $ORACLE_HOME/bin/nmhs
[root@host1 ~]# chown root $ORACLE_HOME/bin/jssu
[root@host1 ~]# chmod 4750 $ORACLE_HOME/bin/jssu

Monday, 31 August 2015

Error on auto execute of job SYS.ORA$_AT_SA_SPC_SY (external link)

Today I got a ticket to handle, with content more or less like the following:
ORA-12012: error on auto execute of job "SYS"."ORA$AT_SA_SPC_SY_1202"
ORA-20000: ORU-10027: buffer overflow, limit of 20000 bytes
ORA-06512: at "SYS.DBMS_ADVISOR", line 201
ORA-06512: at "SYS.DBMS_SPACE", line 2480
ORA-06512: at "SYS.DBMS_SPACE", line 2553

The issue is not a big deal for me - a problem with the space advisor is not a critical thing IMHO - but along the way I found a short, nice note on the subject: http://qdosmsq.dunbar-it.co.uk/blog/2013/01/oraat_sa_spc_sy-jobs-failing/
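If one wanted to have a look at the task itself, its state may be checked through the autotask views, e.g. (a sketch):
-- the failing job belongs to the automatic segment (space) advisor task
SELECT client_name, status FROM dba_autotask_client WHERE client_name = 'auto space advisor';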