Tuesday, 5 April 2016

Long DROP USER command

Lately I called a DROP USER command - as this was a slow environment (and I had once called this command on the same user before), I expected it to take a long time. The working hours passed by while I observed it - it waited on "db file sequential read", so I did not bother to check the db health at all and went home.
The next day I found that the command had failed with an error stating there were no temporary files.
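In retrospect, a quick sanity check on the temporary tablespace would have saved a day. A minimal sketch (the tablespace name, path and size are illustrative):
  -- an empty result here explains the failure
  SELECT tablespace_name, file_name, bytes FROM dba_temp_files;
  -- if the temp tablespace has no files, add one before retrying the drop
  ALTER TABLESPACE temp ADD TEMPFILE '/oracle/oradata/DB1/temp01.dbf' SIZE 1G;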

Tuesday, 1 March 2016

Using a wallet

  1. wallet creation
    # I usually place it under TNS_ADMIN or one level below in a separate
    # directory, e.g. TNS_ADMIN=/oracle/admin/network/admin and then the wallet
    # is in /oracle/admin/network/wallet
    # the call requires specifying a password twice; this is the global
    # password to the wallet
    $ mkstore -wrl /oracle/admin/network/wallet/my.wallet -create
    Oracle Secret Store Tool : Version 11.2.0.4.0 - Production
    Copyright (c) 2004, 2013, Oracle and/or its affiliates. 
    All rights reserved.
    
    Enter password:            
       
    Enter password again:            
       
    
  2. adding entries to the wallet
    • creating an entry in tnsnames.ora - one will connect through that entry without a password
      alias_in_tns = (DESCRIPTION = 
        (ADDRESS = (PROTOCOL = TCP) (HOST = host1) (PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = service1)))
      
    • creating credentials for user1 in the wallet (alias_in_tns is the TNS entry created above, user1 the schema name, pass the password)
      [oracle@ap-murex1t admin]$ mkstore -wrl [path to wallet directory] \
       -createCredential alias_in_tns user1 pass
      Oracle Secret Store Tool : Version 11.2.0.4.0 - Production
      Copyright (c) 2004, 2013, Oracle and/or its affiliates. 
      All rights reserved.
      
      Enter wallet password:            
         
      Create credential oracle.security.client.connect_string1
      

The default privileges on the wallet directory are set only for the wallet owner, so other users would get ORA-12578 and be asked for credentials. Of course this makes perfect sense, as it limits access to the wallet. However, sometimes you need wider access on a test environment and may abuse the mechanism simply by setting the permissions for others as well.
The whole thing works, of course, only if one provides the correct information in the sqlnet.ora file:
WALLET_LOCATION = (SOURCE = 
  (METHOD = FILE)(METHOD_DATA = (DIRECTORY = [path to wallet directory])))
SQLNET.WALLET_OVERRIDE = TRUE 
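With the wallet, the tnsnames entry and the sqlnet.ora settings in place, connecting becomes passwordless. A short check-and-use sketch (the paths and alias are the ones from the example above):
  # list what is stored in the wallet
  $ mkstore -wrl /oracle/admin/network/wallet/my.wallet -listCredential
  # connect without typing a password
  $ sqlplus /@alias_in_tns
The slash before the @ tells the client to take the credentials from the wallet instead of prompting for them.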

Friday, 19 February 2016

minact-scn: useg scan erroring out with error e:12751

Lately I have come across the following entries in the alert.log:
minact-scn: useg scan erroring out with error e:12751
According to Metalink this is bound to a high load on the system and means that an undo segment scan has failed with an ORA-12751 error.
ORA-12751 itself is described as "cpu time or run time policy violation".

In my case though the load on the system is not very high and the system is currently used for one task only, which is a massive Data Pump import. The import leads to a temporary lack of space for archivelogs (ORA-16038, ORA-19504, ORA-16014). Of course this puts pressure on the instance, which keeps trying to save the archivelogs, and I suppose this state causes such notes in the alert.log.

Emergency ASH flushes also occur then:
Active Session History (ASH) performed an emergency flush. This may mean that ASH is undersized. If emergency flushes are a recurring issue, you may consider increasing ASH size by setting the value of _ASH_SIZE to a sufficiently large value. Currently, ASH size is 8388608 bytes. Both ASH size and the total number of emergency flushes since instance startup can be monitored by running the following query:
select total_size,awr_flush_emergency_count from v$ash_info; 

ORA-00093: _shared_pool_reserved_min_alloc must be between 4000 and 0

Today I ran into the following error:
ORA-00093: _shared_pool_reserved_min_alloc must be between 4000 and 0
The most annoying part was the fact that I could not start the instance due to this error. The story in short: within a screen session I opened a sqlplus session and, within it, vim for editing a newly generated pfile. Then I shut down the instance and edited some memory settings. So far, so good - I started the instance and got this error. And I had not even specified this parameter in the configuration.
I tried to find something on the internet and on Metalink:
  1. http://askdba.org/weblog/2008/04/tweaking-_shared_pool_reserved_min_alloc-and-ora-4031/
  2. Ora-00828 With Automatic Shared Memory Management (Doc ID 1251784.1)
I checked all the possibilities - looked for mistakes and misspellings in the settings - all for nothing. Finally a colleague of mine created a pfile, then created an spfile from that pfile (with no changes at all), then started the instance with STARTUP FORCE and voila, the instance worked like a charm.
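Roughly, the recovery sequence was along these lines (the pfile path is illustrative):
  SQL> CREATE PFILE='/tmp/init_mydb.ora' FROM SPFILE;
  SQL> CREATE SPFILE FROM PFILE='/tmp/init_mydb.ora';
  SQL> STARTUP FORCE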
I am not sure what was wrong - the only clue is the possibility that I introduced some hidden character into the pfile, so the true reason was similar to the one described under the 1st link mentioned above.
Such a theory is supported by the following facts:
  • the error message displays 0 as the upper bound (which is nonsense) - I suppose the lower bound is simply hard-coded as 4000 (or 4400) bytes, while the upper bound derives its value from a calculation based on the shared pool size
  • assuming something was wrong with the shared pool size specification, I suppose it would be treated as 0 and hence the upper bound would also be 0 (as any % of 0 is still 0 ;-))
The conclusion: if the upper bound in the ORA-00093 message is 0, check your pfile and spfile byte by byte.
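A convenient way to do such a byte-by-byte check is to dump the file with every character made visible (the file name is illustrative):
  $ od -c $ORACLE_HOME/dbs/initMYDB.ora | less
  # any unexpected control characters or \xxx escapes will stand out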

Monday, 8 February 2016

Large Pages Implementation in Windows (external link)

A short note on how to enable large pages support for Oracle on Windows - link

Monday, 4 January 2016

About the Synergy tool

This time a short article somewhat off topic, as it is about a workstation tool and not really bound to databases.

Overview

Synergy is a tool that allows binding all the workstations one uses to a single common set of input peripherals (i.e. mouse and keyboard). It also handles transferring the clipboard content between workstations, so it greatly enhances productivity. It is provided for Windows, Linux and Mac OS X, so I would say it covers 99.99% of the workstation OS population.

My architecture

My architecture means 2 laptops - the 1st one running Linux (a current Ubuntu) and the 2nd one running Windows 7. On the Linux side the package is called synergy, version 1.7.0; on the Windows side I have Synergy version 1.3.4.

My configuration

Synergy on the Linux side is configured as the server, so the keyboard and mouse of the Linux workstation become the common input set. On Windows the software works as a client. As I have the community version, the communication between the workstations would not be secure if I simply relied on Synergy itself. One may buy the PRO version to have the transfer secured out of the box, and I suppose this is a very convenient solution. Here I provide a "poor man's" solution, where the synergy communication is passed through an SSH tunnel.

Client and server insecure configuration

For the client I check the 1st radio button (Use another computer's keyboard and mouse) and specify the IP of the server.
For the server I choose the Server checkbox and configure interactively - in the interactive configuration window I set the layout of the workstations' screens. I believe the available screens are detected from the connected clients, so I would start with the insecure connection. The connection is made on port 24800.
As you may see, this configuration is very easy.
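For reference, the same layout can also be kept in a plain text file and passed to the server with synergys -c; a minimal sketch, with linuxbox and winbox as hypothetical screen names:
  section: screens
      linuxbox:
      winbox:
  end
  section: links
      linuxbox:
          right = winbox
      winbox:
          left = linuxbox
  end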

Securing the connection

Now it is time to secure it. I did it by opening an SSH tunnel from the client to the server, so actually everything is done on the Windows side (which has some pros and cons). The only thing done on the server is the SSH key generation (ssh-keygen -t rsa -b 2048).
The first thing on Windows is downloading the PuTTY SSH client. Then I generated an SSH key (PUTTYGEN.EXE, SSH-2 RSA algorithm, 2048 bits for length) and saved both keys (private and public) to files. Next I put the public key into the server's authorized_keys file in order to connect by key ($HOME/.ssh/authorized_keys - if it does not exist, just create it with your favourite editor). To move the key one may use the currently working insecure synergy installation :-).
An additional step is to build a batch file running pageant.exe (an SSH agent which keeps the decrypted key in memory, so the key password is entered only once):
REM content of the batch file
start "" c:\app\putty\pageant.exe "C:\Users\{user}\Documents\win2priv.ppk"
This batch file has to be put into the Startup group to run just after the Windows initialization.
Further, I needed a way to establish the tunnel to the server. I tried to do it with another batch file calling plink.exe (another executable from PuTTY), but it did not work (maybe I did something wrong), so I configured a session in PuTTY instead - one has to call PuTTY, create a session to the server and then, in Connection->SSH->Tunnels, add an entry (Source port: 24800, Destination: 127.0.0.1:24800, options Local and Auto) with the Add button.
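For completeness, the plink.exe call I was aiming at would look roughly like this (the server address, user and key path are placeholders; as noted, this route did not work for me):
  REM forward local port 24800 to the synergy server over SSH, no remote shell
  c:\app\putty\plink.exe -N -L 24800:127.0.0.1:24800 -i "C:\Users\{user}\Documents\win2priv.ppk" user@server-ip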

Now we are ready to open the connection to the server (beforehand, however, we need to disable the current synergy connection, as it is established on the same port). After clicking the Open button one has to set the server IP in the synergy client to 127.0.0.1 and run it with the Start button.

Important notes

For some reason not clear to me the PuTTY setup failed, and I had to exchange the PuTTY binaries for the KiTTY binaries (which is pretty much the same tool). With KiTTY the whole thing started to work smoothly.
Pros
The connection is secure :-)
Cons for Linux being server
  • during the Windows initialization the SSH agent will ask for the password to the SSH key
  • one has to open the tunnel manually - you may try with plink.exe (klink.exe from KiTTY) - this probably could be replaced with some Powershell script, but I have no possibility to work with Powershell scripts here
Cons for Windows being server
  • on the Linux side everything may be nicely automated - no problem with opening a tunnel in the background
  • one has to install and run an SSH server on the Windows side

Friday, 11 December 2015

A note about Data Pump behaviour while blocked

Lately I have been managing several db environments on which I perform frequent loads with the Data Pump tools. As those loads are sometimes quite large (up to 0.5T), I constantly run into the situation where the space designated for archivelogs fills up completely, which in turn causes the archival process to stop.
ARCH: Archival stopped, error occurred. Will continue retrying
ORACLE Instance YYYYYY - Archival Error
ORA-16038: log 1 sequence# 3208 cannot be archived
ORA-19504: failed to create file ""
ORA-00312: online log 1 thread 1: '+RECO/YYYYYY/onlinelog/group_1.257.895415461'
ORA-00312: online log 1 thread 1: '+DATA/YYYYYY/onlinelog/group_1.257.895415461'
Fri Dec 11 11:03:35 2015
Archiver process freed from errors. No longer stopped
ARCH: Archival stopped, error occurred. Will continue retrying
ORACLE Instance YYYYYY - Archival Error
ORA-16014: log 1 sequence# 3208 not archived, no available destinations
I used to load data into databases in the NOARCHIVELOG mode, where there is no such issue at all. In many cases the archival break simply means that a Data Pump job stops for a while, until the space for archivelogs is freed (e.g. by a backup) and available for future archivelogs; then it resumes on its own, so it does not matter when exactly the space is freed.
But there are scenarios in which I have faced a different behaviour: when a Data Pump job meets the archival error, it fails with the following:
ORA-39065: unexpected master process exception in RECEIVE
ORA-39078: unable to dequeue message for agent MCP from 
  queue "KUPC$C_1_20151210094644"
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPC$QUE_INT", line 610
ORA-04021: timeout occurred while waiting to lock object
Job "ZZZZZZ"."SYS_IMPORT_SCHEMA_01" stopped due to fatal error 
  at Thu Dec 10 11:54:39 2015 elapsed 0 02:07:55
It seems to me that such situations are possible when parallel job slaves are run. They die waiting for a lock on some object, and possibly it is simply a communication problem between the slaves and the master, as the db freeze disables this communication. However, this is not a fatal error in the end. Facing it, one may attach to the Data Pump job: it seems the job freezes and does not recover automatically when the work becomes possible again, so the user has to call the CONTINUE_CLIENT command - then the job recreates the slaves and steps further.
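For illustration, attaching and resuming looks roughly like this (using the job name from the log below; the connect string is masked as in the original output):
  $ impdp ZZZZZZ/******** attach=SYS_IMPORT_SCHEMA_01
  Import> CONTINUE_CLIENT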
Job SYS_IMPORT_SCHEMA_01 has been reopened at Thu Dec 10 12:23:51 2015
Restarting "ZZZZZZ"."SYS_IMPORT_SCHEMA_01":  ZZZZZZ/********
  parfile=ZZZZZZ.par