Uncover free hand-drawn-like sketches for DBAs

As you may have seen in my blog posts and presentations over the last few months, I have been using Excalidraw to draw hand-drawn-like diagrams for a while now. A colleague at work made me aware of the tool, and since my first use of Excalidraw, I have been thrilled.

But what is Excalidraw? It is a virtual whiteboard for sketching hand-drawn-like diagrams, collaborative and end-to-end encrypted. A perfect companion for virtual trainings, workshops and more. The tool is simple and intuitive; within a few minutes you can create your first diagrams. There are also a number of public libraries available.

Example sketch of an Oracle DB Point in Time Recovery

The Tool

Excalidraw is a virtual whiteboard for sketching hand-drawn-like diagrams. The blog post One Year of Excalidraw provides a good overview of the technology stack behind it as well as the available features. As an end user, you use Excalidraw as a web app or, alternatively, embedded in an application such as Visual Studio Code (Excalidraw Extension).

Below is a list of the coolest features:

  • Collaboration: work together on a sketch
  • Various translations
  • File system integration and file handling
  • Dark Mode
  • Encrypted Data, by using end-to-end encryption
  • Excalidraw public libraries
  • Excalicharts, Lines, Arrows, Grid and Stats and much more
  • Visual Studio Code Excalidraw Extension
Visual Studio Code with the Excalidraw Extension

OraDBA Sketches

After half a year of virtual trainings, workshops and presentations, I have drawn quite a few of my diagrams with Excalidraw. Usually I store them as offline files in the corresponding project folder or Git repository. Nevertheless, there are a few sketches which I use again and again. Since it is not always practical to create an Excalidraw library, I have created a GitHub repository with various sketches. With the combination of Visual Studio Code, Git and the Excalidraw Extension, it is thus relatively easy to access existing sketches, e.g. by cloning the repository as shown after the list below. The corresponding GitHub repository is available via oehrlis/sketches. At the moment there are sketches available for the following topics.

  • backup Sketches all around the topic of backup & recovery, mainly backup & recovery use cases for Oracle databases.
  • libraries A couple of Excalidraw libraries which have not yet been officially published in the Excalidraw library repository.
  • misc Miscellaneous sketches about different topics. Basically all sketches that are not or not yet stored somewhere else.
  • oci Sketches all around the topic of Oracle Cloud Infrastructure (OCI) including LAB setup, Terraform and more.
  • security Sketches all around the topic of Oracle Database Security.
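
To work with the sketches locally, the repository can simply be cloned and opened in Visual Studio Code with the Excalidraw Extension. The URL below is derived from the repository name oehrlis/sketches mentioned above; the target folder is just an example.

# clone the sketches repository and open it in Visual Studio Code
git clone https://github.com/oehrlis/sketches.git ~/sketches
code ~/sketches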

The sketches can be used as ideas, templates or in any other form. If you use the sketches in an official document, please include an appropriate reference / source. I would definitely appreciate any contributions to this GitHub repository, e.g. comments, stars, issues, pull requests etc.


How to easily back up photos when traveling!

After a long time of work and vacations at home, it is soon vacation time again. The ideal time to travel more and discover new places. For me, that includes taking my dusty photography equipment with me. However, it’s not quite like it used to be. The days are gone when I carried a bag full of slide films across South America. Hence the challenges have changed as well. I no longer have to worry about my films being inspected by hand. In return, I now have to take care of a pile of data. And after all, backup & recovery is one of the many topics that keep me busy in my work.

The problem: How to backup photos while traveling?

The Problem

With modern digital cameras and high resolutions, a few GB of data can be generated very quickly. Depending on the model, this can be several MB or even up to 100MB per image in RAW format. How do you become master of this data flood? What options do I have?

  • Amount of Images: The first thing you should do is to take pictures consciously and not just snap them. 🙄
  • Storage Options:
    • Take enough SD cards with you…
    • Take large enough SD Cards with you…
    • Backup SD cards to use them several times…

Simply taking a huge number of SD cards with me is not something I want to do. On the one hand, it gets expensive; on the other hand, an SD card can also break. Losing a 128GB SD card with half the pictures of your trip is not funny. So what remains is a regular backup. You might think, no problem, just do it on your notebook. Nope. I work in IT as a consultant and carry 1-2 notebooks around with me all year. So the first thing I do not take on vacation is a notebook.

A few Ideas

So what options remain to back up data on a six-week trip?

  • Option 1: Upload the pictures on a regular basis to a cloud storage, e.g. Dropbox, OneDrive, iCloud or your private NAS with remote access. But here you need a camera or another device with which you can upload the data via the internet. And what about the network speed? Roaming? With 200-300GB you will surely get desperate.
  • Option 2: Come on, just take your laptop with you…
  • Option 3: Copy your pictures to your tablet or phone. Mmh yes, but who has 500GB of free space?
  • Option 4: Use one of these nice HDDs / SSDs with a built-in SD card reader. For example:
    • LaCie Rugged BOSS SSD with SD card reader, battery etc. This allows copying the files of your SD cards directly on the device. Actually a cool solution, just a bit expensive.
    • LaCie Rugged RAID Pro with 4TB and an SD card reader. Unfortunately this one requires a device to copy the data. When using it with a tablet, you also have to power the HDD with a power supply.
    • WD My Passport Wireless to copy the data onto.
    • Build your own Rugged BOSS with a Raspberry Pi
  • Option 5: A regular HDD / SSD and an SD card reader with a tablet or phone

Which option would you prefer? I first thought about option 4 and buying a LaCie Rugged BOSS SSD. But this was too proprietary and too expensive for me. So I decided to go with option 5, especially since I already have most of the components.

The Solution

The Solution: SD card, SSD and iPad to backup

My photo gear packing list for the trip is completed as follows:

  • Apple iPad Pro. Could also be an iPhone or any other tablet / phone. It either requires a USB-C port or a different hub.
  • Satechi 6in1 Hub / Dock with an SD card reader, USB port and the corresponding cable for the iPad. Could also be any other hub.
  • SanDisk Extreme Pro Portable: I use a 2TB model with Apple APFS and encryption. The latest iOS devices can also read APFS. This way I can keep my disk secure.
  • USB-C cable for the SSD
  • SD cards for the camera

Yeah, but how do you do the backup now? I simply plug everything together. As the iPad can handle external storage devices, you are able to read / access the SD card as well as the SSD. You can either use the Files app or any other app which handles files. In the following picture you see the Files app with the SSD (traveler) and the SD card (Leica M). To get access to the SSD with APFS and encryption, I only had to enter the password to unlock it.

Apple iOS Files App

Within the Files app you can then copy each picture individually or the whole content of the SD card. If you enjoy moving folders or individual images with your finger, it is quite OK. It gets complicated if you do not regularly delete the SD card and thus copy the already existing images again. To make this a bit faster, something like rsync would not be bad. However, this is not available on the iPad. But there is an app called Sync Folder Pro. With this app you can define simple tasks to synchronise two directories, e.g. the SD card with the SSD.

Sync Folder Pro synchronising SD card with SSD backup folder

In my case I just created a task for the two folders with the sync mode A+>B. This updates the backup folder with the new content of the SD card.
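
For comparison, on a notebook a one-way sync like the A+>B mode could be sketched with rsync. The volume paths below are just example values for the SD card and the SSD backup folder.

# copy only new or changed files from the SD card to the backup folder on the SSD
rsync -avh --progress /Volumes/LEICA_M/DCIM/ /Volumes/traveler/backup/LEICA_M/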

My workflow now looks like this:

  1. Put everything together
  2. Unlock encrypted SSD
  3. Start sync job to update the backup on SSD with the latest photos from any of my SD cards
  4. Optionally start to review a few pictures using the Files app and/or import some of them into Lightroom on the iPad

Conclusion

So the tests at home on the desk were successful. What is still missing is the real use in the coming summer vacation. But I am confident that this will work out well. In the event that I do have fast WiFi, I will of course back up data to my NAS via WiFi. To be on the safe side, I’ll take my analog Leica M6 with a few films. It has already passed its toughest test. In addition, I know that with this camera I can definitely take home a few pictures. The problem is then rather to develop the films at home…

How to write Unified Audit Trail Records to SYSLOG 

With the introduction of Oracle Unified Audit, Oracle has completely redesigned the process of logging audit events. With the new unified audit trail, there is only one place where audit records are stored. OK, the audit trail exists per PDB, and for a read-only database there is additionally a binary overflow file. However, the times are over in which we had to deal with AUD$, FGA_LOG$ or DVSYS.AUDIT_TRAIL$ tables and all kinds of audit files. Everything is all right, isn’t it?

Unfortunately not quite. Oracle left out the possibility to write audit records to SYSLOG in the first implementations of Oracle Unified Audit. With the releases Oracle 18c and later 19c, this functionality was added back step by step. But before you get too excited and start switching the audit trail back to SYSLOG, you need to take a look at one or two things. The new initialisation parameters UNIFIED_AUDIT_SYSTEMLOG and UNIFIED_AUDIT_COMMON_SYSTEMLOG do not work quite the same as AUDIT_SYSLOG_LEVEL used to. In this blog post, we’ll take a closer look at the audit configuration in the context of SYSLOG.

Background

Even with the SYSLOG configuration for Oracle Unified Audit, the unified audit infrastructure as well as the audit policies must be configured as before. In this respect, nothing changes with SYSLOG forwarding. With the new initialisation parameters below, Oracle only enables the audit records to be additionally forwarded to SYSLOG in a reduced form.

  • UNIFIED_AUDIT_SYSTEMLOG specifies whether key fields of unified audit records will be written to the SYSLOG utility (on UNIX platforms) or to the Windows Event Viewer (on Windows). In a CDB, this parameter is a per-PDB static initialisation parameter. This parameter is available from Oracle 18c.
  • UNIFIED_AUDIT_COMMON_SYSTEMLOG specifies whether key fields of unified audit records generated due to common audit policies will be written to the SYSLOG utility. This parameter is available only from Oracle 19c.

The following sketch shows the schematic structure of the unified audit configuration with SYSLOG forwarding in an Oracle Multitenant database, where the following has been configured:

  • Common audit policy COMMON_ORA_LOGON for audit action LOGON
  • Local audit policy LOCAL_ORA_LOGON for audit action LOGON
  • UNIFIED_AUDIT_SYSTEMLOG parameter set to SYSLOG facility local0.info
  • UNIFIED_AUDIT_COMMON_SYSTEMLOG parameter set to SYSLOG facility local1.info
Unified Audit with SYSLOG configuration in an Oracle Multitenant Database

The following use cases are drawn in the sketch:

Common User Login to CDB$ROOT

  1. A common user logs into CDB$ROOT and COMMON_ORA_LOGON creates an audit event for the audit action LOGON
  2. A single audit record is created in the local UNIFIED_AUDIT_TRAIL of CDB$ROOT
  3. Additionally, an audit record is created in the SYSLOG facility local1.info

Common User Login to PDB01

  1. A common user logs into PDB01 and COMMON_ORA_LOGON as well as LOCAL_ORA_LOGON create an audit event for the audit action LOGON
  2. A single audit record is created in the local UNIFIED_AUDIT_TRAIL of PDB01
  3. Additionally, an audit record is created in the SYSLOG facility local1.info

Local User Login to PDB02

  1. A local user logs into PDB02 and LOCAL_ORA_LOGON creates an audit event for the audit action LOGON
  2. A single audit record is created in the local UNIFIED_AUDIT_TRAIL of PDB02
  3. Additionally, an audit record is created in the SYSLOG facility local0.info

Note: An audit record is created in the UNIFIED_AUDIT_TRAIL as well as in SYSLOG. This means that you also have to define corresponding housekeeping for the UNIFIED_AUDIT_TRAIL, for example as sketched below.
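
A minimal housekeeping sketch using DBMS_AUDIT_MGMT could look as follows. The retention of 7 days and the 24-hour purge interval are just example values and must be aligned with your own audit concept.

-- allow everything older than 7 days to be purged
BEGIN
    dbms_audit_mgmt.set_last_archive_timestamp(
    audit_trail_type  => dbms_audit_mgmt.audit_trail_unified,
    last_archive_time => systimestamp - INTERVAL '7' DAY);
END;
/
 
-- create a purge job which honours the last archive timestamp
BEGIN
    dbms_audit_mgmt.create_purge_job(
    audit_trail_type           => dbms_audit_mgmt.audit_trail_unified,
    audit_trail_purge_interval => 24,
    audit_trail_purge_name     => 'purge_unified_audit_trail',
    use_last_arch_timestamp    => TRUE);
END;
/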

Setup Unified Audit with SYSLOG Integration

Configure SYSLOG

First we have to configure the corresponding SYSLOG destinations for our database audit information. In the following, I will not go into the detailed configuration of SYSLOG respectively RSYSLOG; we only extend the configuration with two additional log files. The first thing to do is to edit the /etc/rsyslog.conf file as the root user. We add two new local facilities under the RULES section.

sudo vi /etc/rsyslog.conf

# Unified Audit Rules
local0.info            /var/log/oracle_common_audit_records.log
local1.info            /var/log/oracle_audit_records.log

Afterwards the RSYSLOG service must be restarted.

sudo systemctl restart rsyslog.service 

Audit Initialisation Parameters

The Oracle initialisation parameters for the audit configuration are static, i.e. they require an instance restart, and have to be modified via CDB$ROOT. Below we just change UNIFIED_AUDIT_COMMON_SYSTEMLOG; optionally, we could also forward local audit records to SYSLOG by setting UNIFIED_AUDIT_SYSTEMLOG.

  • Connect as SYS to CDB$ROOT and change UNIFIED_AUDIT_COMMON_SYSTEMLOG
CONNECT / AS SYSDBA
SHOW PARAMETER unified_audit_common_systemlog
ALTER SYSTEM SET unified_audit_common_systemlog='local0.info' SCOPE=SPFILE;
  • Restart the whole container database
STARTUP FORCE;
SHOW PARAMETER unified_audit_common_systemlog

As of now, audit records for common audit events are forwarded to the appropriate SYSLOG facility.

Audit Policies

For simplicity, we test the audit configuration with a few simple audit policies for the audit action LOGON. Of course, audit policies can be defined for any actions.

  • Create a common audit policy to log all logon events of common users in CDB$ROOT or any PDB.
CONNECT / AS SYSDBA
CREATE AUDIT POLICY common_ora_logon ACTIONS LOGON CONTAINER=ALL;
AUDIT POLICY common_ora_logon;
  • Check which audit policies are enabled.
SET LINESIZE WINDOW
COL policy_name FOR A16
COL entity_name FOR A11
SELECT * FROM audit_unified_enabled_policies;
 
POLICY_NAME	 ENABLED_OPTION  ENTITY_NAME ENTITY_ SUC FAI
---------------- --------------- ----------- ------- --- ---
COMMON_ORA_LOGON BY USER	 ALL USERS   USER    YES YES
  • Create a local audit policy to log all logon events of local users in a particular PDB.
ALTER SESSION SET CONTAINER=pdb1;
CREATE AUDIT POLICY local_ora_logon ACTIONS LOGON;
AUDIT POLICY local_ora_logon;
  • Check which audit policies are enabled. We can now see that the local audit policy as well as the common audit policy from before are active in the PDB.
SET LINESIZE WINDOW
COL policy_name FOR A16
COL entity_name FOR A11
SELECT * FROM audit_unified_enabled_policies;
 
POLICY_NAME	 ENABLED_OPTION  ENTITY_NAME ENTITY_ SUC FAI
---------------- --------------- ----------- ------- --- ---
LOCAL_ORA_LOGON  BY USER	 ALL USERS   USER    YES YES
COMMON_ORA_LOGON BY USER	 ALL USERS   USER    YES YES

Test Audit Configuration

Let’s purge the audit trail in CDB$ROOT as well as in PDB1 so that we do not have too much information in the trail.

CONNECT / AS SYSDBA
BEGIN
    dbms_audit_mgmt.clean_audit_trail(
    audit_trail_type         =>  dbms_audit_mgmt.audit_trail_unified,
    use_last_arch_timestamp  =>  FALSE);
END;
/
 
ALTER SESSION SET container=PDB1;
BEGIN
    dbms_audit_mgmt.clean_audit_trail(
    audit_trail_type         =>  dbms_audit_mgmt.audit_trail_unified,
    use_last_arch_timestamp  =>  FALSE);
END;
/

First we log in as user SYSTEM to CDB$ROOT of database TDB19C.

sqlplus system@TDB19C

Let’s check what we see in the view UNIFIED_AUDIT_TRAIL.

SET LINESIZE 160 PAGESIZE 200
COL event_timestamp FOR A17
COL dbusername FOR A10
COL action_name FOR A17
COL return_code FOR 999999
COL unified_audit_policies FOR A30
 
SELECT
    to_char(event_timestamp,'dd.mm.yy hh24:mi:ss') event_timestamp,
    sessionid,
    dbusername,
    action_name,
    return_code,
    unified_audit_policies
FROM unified_audit_trail
ORDER BY event_timestamp;
 
EVENT_TIMESTAMP    SESSIONID DBUSERNAME ACTION_NAME	  RETURN_CODE UNIFIED_AUDIT_POLICIES
----------------- ---------- ---------- ----------------- ----------- ------------------------------
22.03.22 09:07:03 1430860507 SYS	LOGOFF BY CLEANUP	    0
23.03.22 14:49:24 2404020191 SYS	EXECUTE 		    0
23.03.22 14:49:44 2578688223 SYSTEM	LOGON			    0 COMMON_ORA_LOGON

Let’s see what we have in the SYSLOG log file.

host sudo grep -i 2578688223 /var/log/oracle_common_audit_records.log
Mar 23 14:49:44 localhost journal: Oracle Unified Audit[17838]: LENGTH: '204' TYPE:"4" DBID:"1612911514" SESID:"2578688223" CLIENTID:"" ENTRYID:"1" STMTID:"1" DBUSER:"SYSTEM" CURUSER:"SYSTEM" ACTION:"100" RETCODE:"0" SCHEMA:"" OBJNAME:"" PDB_GUID:"86B637B62FDF7A65E053F706E80A27CA"

The action number can be looked up in the table AUDIT_ACTIONS.

SELECT * FROM audit_actions WHERE action=100;      

    ACTION NAME
---------- ----------------------------
       100 LOGON

Now let’s see what happens when we log in as SYSTEM into PDB1.

CONNECT system@pdb1.trivadislabs.com

We now see an audit record in the UNIFIED_AUDIT_TRAIL of the PDB. The active policies for this common user are LOCAL_ORA_LOGON and COMMON_ORA_LOGON.

SET LINESIZE 160 PAGESIZE 200
COL event_timestamp FOR A17
COL dbusername FOR A10
COL action_name FOR A11
COL return_code FOR 999999
COL unified_audit_policies FOR A33
 
SELECT
    to_char(event_timestamp,'dd.mm.yy hh24:mi:ss') event_timestamp,
    sessionid,
    dbusername,
    action_name,
    return_code,
    unified_audit_policies
FROM unified_audit_trail
ORDER BY event_timestamp;

EVENT_TIMESTAMP    SESSIONID DBUSERNAME ACTION_NAME RETURN_CODE UNIFIED_AUDIT_POLICIES
----------------- ---------- ---------- ----------- ----------- ---------------------------------
23.03.22 14:54:22 1216566979 SYS	EXECUTE 	      0
23.03.22 14:55:01 3827730564 SYSTEM	LOGON		      0 LOCAL_ORA_LOGON, COMMON_ORA_LOGON

Let’s see what we have in the SYSLOG log file.

host sudo grep -i 3827730564 /var/log/oracle_common_audit_records.log
Mar 23 14:55:01 localhost journal: Oracle Unified Audit[18210]: LENGTH: '203' TYPE:"4" DBID:"817014372" SESID:"3827730564" CLIENTID:"" ENTRYID:"1" STMTID:"1" DBUSER:"SYSTEM" CURUSER:"SYSTEM" ACTION:"100" RETCODE:"0" SCHEMA:"" OBJNAME:"" PDB_GUID:"B8E3D716A96C1507E0530100007F363B"

As a final test, we log into PDB1 as local user SCOTT.

connect scott/tiger@pdb1.trivadislabs.com

There is now a new audit record for SCOTT in the UNIFIED_AUDIT_TRAIL of the PDB. The active policy for this local user is LOCAL_ORA_LOGON.

SET LINESIZE 160 PAGESIZE 200
COL event_timestamp FOR A17
COL dbusername FOR A10
COL action_name FOR A11
COL return_code FOR 999999
COL unified_audit_policies FOR A33
 
SELECT
    to_char(event_timestamp,'dd.mm.yy hh24:mi:ss') event_timestamp,
    sessionid,
    dbusername,
    action_name,
    return_code,
    unified_audit_policies
FROM unified_audit_trail
ORDER BY event_timestamp;

EVENT_TIMESTAMP    SESSIONID DBUSERNAME ACTION_NAME RETURN_CODE UNIFIED_AUDIT_POLICIES
----------------- ---------- ---------- ----------- ----------- ---------------------------------
23.03.22 14:54:22 1216566979 SYS	EXECUTE 	      0
23.03.22 14:55:01 3827730564 SYSTEM	LOGON		      0 LOCAL_ORA_LOGON, COMMON_ORA_LOGON
23.03.22 14:59:26 2954396682 SCOTT	LOGON		      0 LOCAL_ORA_LOGON

Because we have only set the parameter UNIFIED_AUDIT_COMMON_SYSTEMLOG, audit records from local audit policies are not forwarded to SYSLOG, and therefore we do not find an entry in the corresponding SYSLOG log files.

host sudo grep -i 2954396682 /var/log/oracle_common_audit_records.log
host sudo grep -i 2954396682 /var/log/oracle_audit_records.log

Conclusion

Although much has been simplified with Oracle Unified Audit, it is easy to lose track of all the common and local audit policies in an Oracle Multitenant environment. An up-to-date audit concept that takes these special cases into account is absolutely essential. This includes the use of the new initialisation parameters for SYSLOG integration. Although the information in SYSLOG is nowhere near as rich as in the UNIFIED_AUDIT_TRAIL itself, this feature allows easy integration of Oracle audit events into a central repository, e.g. Splunk, Elasticsearch or similar, to create a simple audit dashboard. The true source of the audit data, with information on the complete SQL statements, detailed user information etc., remains reserved for the unified audit trail in the database.

Pluggable Database with Unified Audit and SYSLOG configuration

I have mentioned SYSLOG and RSYSLOG alternately in this blog post. The Oracle feature basically targets SYSLOG and all services based on SYSLOG, whereas on my OCI compute instance, where I did my tests, RSYSLOG is configured and used.


Easy replacement of tnsnames.ora with LDAP Directory Server

The tnsnames.ora is a configuration file for Oracle Database respectively Oracle Net Service Names resolution. It contains network service names that are mapped to connection descriptors for the local naming method. With the help of tnsnames.ora, Oracle clients respectively their users can easily access Oracle databases. The connection descriptors provide all relevant information like host, port, service name, etc.
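
As an illustration, a typical tnsnames.ora entry looks like the following sketch. Host, port and service name are example values matching the lab environment used later in this post.

TDB02.trivadislabs.com =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = db19)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = TDB02.trivadislabs.com)
    )
  )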

For larger environments with multiple Oracle databases and hundreds or more clients, managing and distributing the tnsnames.ora becomes difficult. Especially when Oracle Data Guard or Oracle Real Application Clusters are added, where more complex connection descriptions with failover or load balancing information are needed. A reliable Oracle Net Service and an up-to-date tnsnames.ora are crucial for highly available access to Oracle databases. A manual copy of tnsnames.ora or a central NFS or Windows share usually does not meet this requirement.

The Solution Approach

The idea of using an LDAP directory to manage Oracle Network Service Names is not new. There are several official and unofficial approaches how this can be implemented:

  • Use of an Oracle Directory for the administration of Oracle Net Service Names, e.g. Oracle Internet Directory (OID) or Oracle Unified Directory (OUD). However, this solution requires a corresponding Oracle Directory Services Plus license. In addition, OID is anything but lean. OK, the license situation has slightly changed, see Free Oracle Unified Directory for Oracle Net Services.
  • Setup of Oracle Enterprise User Security (EUS). Here, too, an Oracle Directory, i.e. OID or OUD with a corresponding license, is required. With this solution, the authentication and authorisation of the databases is also solved centrally. The setup of EUS is rather complex, as it is not only about setting up an LDAP server. You also have to define and implement an appropriate user and role concept. In any case, this has an impact on existing applications and use cases.
  • Oracle Net Service Names can be entered directly in MS Active Directory. The database names are then resolved via AD. However, this method requires a schema extension in MS Active Directory. This is usually not so easy to implement in larger environments.
  • Alternative LDAP directory servers like OpenLDAP or 389 Directory Server can be used. A schema extension is needed here as well. Since the LDAP servers are only used for the Oracle Net Service Names resolution, this is not critical, especially since an LDAP schema extension is a standard procedure. The advantage of this method is that by using an open source LDAP server the costs remain manageable, although it is not officially supported by Oracle.

The following figure shows a schematic diagram of the Oracle Network Service Names resolution using an open source LDAP directory as an example.

Oracle Network Service Names Resolution

For the solution presented here, we use the open source LDAP server 389 Directory Server. It is available as an open source variant via Fedora and is also part of Red Hat Enterprise Linux as Red Hat Directory Server (RHDS), although a corresponding subscription is necessary in order to use RHDS. However, the 389 Directory Server from the Fedora project also works perfectly under Oracle Enterprise Linux 8.

The solution presented here is based on the 389 Directory Server from the Fedora project. However, the installation and configuration steps can be applied more or less 1:1 to the Red Hat Directory Server (RHDS) as well, especially since the 389 Directory Server documentation is usually based on the Red Hat documentation anyway, or at least references it.

Prerequisites and Requirements

The prerequisites are straightforward. The 389 Directory Server or Red Hat Directory Server (RHDS) is modest in terms of system resources, especially as an LDAP server for a few hundred Oracle Net Service Names. The documentation Red Hat Directory Server 11 Release Notes does not list any specific hardware requirements. It is recommended to provide at least 2 CPU cores plus 16GB of memory for productive environments. For a simple test setup, less is also enough. The base operating system is OEL 8.5 or RHEL 8.5.

Preparation

First of all we have to make sure that the Fedora EPEL repository is added and the 389-ds module is enabled.

sudo dnf -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
sudo yum -y module enable 389-ds

It is also a good idea to open a couple of local firewall ports for LDAP and LDAPS as well as port 9090 for the web-based management console.

sudo firewall-cmd --list-all --permanent --zone=public
sudo firewall-cmd --permanent --add-service=ldap --zone=public
sudo firewall-cmd --permanent --add-service=ldaps --zone=public
sudo firewall-cmd --permanent --add-port=9090/tcp --zone=public
sudo firewall-cmd --reload
sudo firewall-cmd --list-all --permanent --zone=public

Optionally, we pre-create the dirsrv group and assign the OS user oracle as a member of this group. This allows certain administrative activities to be performed as the user oracle.

sudo groupadd --gid 520 dirsrv
cat /etc/group
sudo usermod -a -G dirsrv oracle

Install 389 Directory Server

The installation is quite simple. Since we added the Fedora EPEL repository, we only need to install the appropriate packages and dependencies for 389 Directory Server with yum.

sudo yum install 389-ds-base sscg

Configure 389 Directory Server for Oracle Net Service

Once the packages are installed, we can create a Directory Server instance. The easiest way to do this is to use a template. The template can be created directly with dscreate.

sudo dscreate create-template /tmp/oraNet.inf

The template must then be adjusted accordingly. In particular, the following values must be set:

  • full_machine_name the full qualified hostname of the LDAP directory server.
  • instance_name name of the LDAP directory server instance.
  • root_password password for the directory server root user.
  • suffix used as base DN for the directory information tree (DIT) of the directory server

Below is an example for setting up an instance named oraNet on ldap1.trivadislabs.com with the base DN dc=trivadislabs,dc=com:

[general]
full_machine_name = ldap1.trivadislabs.com
start = True

[slapd]
instance_name = oraNet
port = 389
root_password = Welcome1
secure_port = 636
self_sign_cert = True
self_sign_cert_valid_months = 24

[backend-userroot]
create_suffix_entry = True
suffix = dc=trivadislabs,dc=com

The directory server instance is then created as root with the command dscreate.

sudo dscreate from-file /tmp/oraNet.inf

Check if the instance is running. dscreate does not only create the instance, it also configures the corresponding start/stop scripts.

sudo dsctl --list
sudo systemctl status dirsrv@oraNet.service

So that we can later create the corresponding Oracle Net Service Names objects in the directory server, the schema must be extended. For this we copy the file 90orclNet.ldif into the corresponding instance directory and restart the instance.

curl -Lf https://gist.githubusercontent.com/oehrlis/49767f09c265efc9fc3a74ee16bdfd53/raw/5c71003afe0c38040d317b9a8bc12d3eef113a75/90orclNet.ldif -o /tmp/90orclNet.ldif 
sudo cp /tmp/90orclNet.ldif /etc/dirsrv/slapd-oraNet/schema/90orclNet.ldif
sudo systemctl restart dirsrv@oraNet.service

Check the status of our new directory server instance.

sudo dsctl $(dsctl --list) status
sudo dsctl $(dsctl --list) healthcheck

In the following we will execute several LDAP commands. To avoid having to enter the password interactively each time, we save it temporarily in a local file. This file is passed to the LDAP commands with the parameter -y. It makes sense to delete the password file afterwards.

echo "Welcome1" | tr -d '\n' >.oraNetDirectoryManager.pwd
chmod 600 .oraNetDirectoryManager.pwd

The directory server is actually ready now. However, in order to be able to register the corresponding Oracle Net Service Names, we still need to create an Oracle Context. To do this, we simply create an object cn=OracleContext with the object class orclContext under the base DN dc=trivadislabs,dc=com.

ldapadd -h $(hostname -f) -p 389 -x -D "cn=Directory Manager" \
-y .oraNetDirectoryManager.pwd <<-EOI
dn: cn=OracleContext,dc=trivadislabs,dc=com
objectclass: orclContext
cn: OracleContext

EOI

Since the Oracle clients usually resolve the Oracle Net Service Names with an anonymous LDAP query, respectively an anonymous bind, the ACIs have to be adapted in a way that anonymous searches are allowed in the Oracle Context.

ldapadd -h $(hostname -f) -p 389 -x -D "cn=Directory Manager" \
-y .oraNetDirectoryManager.pwd <<-EOI
dn: dc=trivadislabs,dc=com
changetype: modify
add: aci
aci: (targetattr!="userPassword||authPassword")(version 3.0; acl "Anonymous read access"; allow (read,search,compare) userdn="ldap:///anyone";)

EOI

LDAP Based Oracle Net Services

We now have an empty 389 Directory Server ready to be used for Oracle Net Service Names resolution. It’s about time to add our first entry using ldapadd. In the following example we add a new entry with the name TDB02 and its Oracle Net Description String.

ldapadd -h $(hostname -f) -p 389 -x -D "cn=Directory Manager" \
-y .oraNetDirectoryManager.pwd <<-EOI
dn: cn=TDB02,cn=OracleContext,dc=trivadislabs,dc=com
objectclass: top
objectclass: orclNetService
cn: TDB02
orclNetDescString: (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=db19)(PORT=1521))
 (CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=TDB02.trivadislabs.com))(UR=A))

EOI

With ldapmodify we can also modify existing entries.

ldapmodify -h $(hostname -f) -p 389 -x -D "cn=Directory Manager" \
-y .oraNetDirectoryManager.pwd <<-EOI
dn: cn=TDB02,cn=OracleContext,dc=trivadislabs,dc=com
changetype: modify
replace: orclNetDescString
orclNetDescString: (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=db19)(PORT=1521))
 (CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=TDB02.trivadislabs.com))(UR=A))

EOI

Or search for entries using ldapsearch.

ldapsearch -h $(hostname -f) -p 389 -x -LLL -b "dc=trivadislabs,dc=com" -s sub "(&(objectclass=orclNetService)(cn=TDB0*))"

With ldapdelete the entries can be deleted afterwards.

ldapdelete -h $(hostname -f) -p 389 -x -D "cn=Directory Manager" \
-y .oraNetDirectoryManager.pwd \
cn=TDB03,cn=OracleContext,dc=trivadislabs,dc=com

Before we are able to use the LDAP-based Oracle Net Service Names, we have to configure Oracle Net. To do this, we need to adjust the name resolution order in $TNS_ADMIN/sqlnet.ora with the NAMES.DIRECTORY_PATH parameter and put LDAP first.

NAMES.DIRECTORY_PATH=(LDAP, TNSNAMES, EZCONNECT )

In addition, the file $TNS_ADMIN/ldap.ora must be created. In it, the LDAP server configuration must be specified as follows.

DIRECTORY_SERVERS=(ldap1.trivadislabs.com:389:636)
DEFAULT_ADMIN_CONTEXT="dc=trivadislabs,dc=com"
DIRECTORY_SERVER_TYPE=OID

Finally we can do a tnsping and check whether the Oracle Net Service Name is resolved via LDAP or tnsnames.ora.

oracle@db19:~/ [rdbms19] tnsping TDB02

TNS Ping Utility for Linux: Version 19.0.0.0.0 - Production on 01-MAR-2022 06:31:06

Copyright (c) 1997, 2021, Oracle.  All rights reserved.

Used parameter files:
/u01/app/oracle/network/admin/sqlnet.ora

Used LDAP adapter to resolve the alias
Attempting to contact (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=db19)(PORT=1521))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=TDB02.trivadislabs.com))(UR=A))
OK (10 msec)

What’s Next?

This blog post describes a simple configuration of the 389 Directory Server for Oracle Net Services. In principle, nothing stands in the way of productive use. However, it is recommended to consider a few points in more detail.

  • User and role concept for the administration and management of data in LDAP.
  • Toolset for administration e.g. scripts, LDAP browser etc.
  • Highly available configuration of the 389 Directory Server, e.g. multiple LDAP servers with an appropriate replication configuration.
  • Certificates suitable for production.
  • Development and implementation of an operating and security concept. This includes backup and restore tasks, among others.
  • License and subscription clarification, especially when using the RedHat Directory Server instead of the pure 389 Directory Server.

Conclusion

With 389 Directory Server, just as with OpenLDAP, it is relatively easy to create a central directory for the Oracle Net Service Names or tnsnames.ora. Within a few minutes you have built a stand-alone LDAP server. With a little more effort, configuring secure SSL certificates, extended directory information trees with different suffixes, as well as replication, etc. is also easily possible.

Based on this approach, we have already been able to successfully set up highly available LDAP directory servers with multiple suffixes respectively Oracle Network Service Domain Names within the scope of customer projects. Thus, nothing stands in the way of replacing the cumbersome manual administration of the tnsnames.ora files with a central directory. Especially if only the Oracle Network Service Names are in focus. If, in addition, the authentication and authorisation of the databases is to be set up centrally, there is no way around Oracle Enterprise User Security (EUS) or Oracle Centrally Managed Users (CMU). The advantage of the solution presented here is that it can be combined with Oracle Centrally Managed Users (CMU). I.e. authentication and authorisation is done with CMU via Active Directory Integration, while name resolution is based on the LDAP directory. All with manageable effort and without additional licensing costs. In contrast, a solution with Oracle Enterprise User Security (EUS) is somewhat more flexible, but also more complex and cost-intensive.


Free Oracle Unified Directory for Oracle Net Services

The tnsnames.ora is a configuration file for Oracle database name resolution. It contains network service names that are mapped to connection descriptors for the local naming method. With the help of tnsnames.ora, Oracle clients respectively their users can easily access Oracle databases. The connection descriptors provide all relevant information like host, port, service name, etc.

For larger environments with multiple Oracle databases and hundreds or more clients, managing and distributing the tnsnames.ora becomes cumbersome. The management of Oracle Net Service Names can be done with the following Oracle Solutions and Tools:

  • Manual management of Oracle Net Service Names in one or more tnsnames.ora files, e.g. with a version control system, NFS share etc.
  • Oracle Network Service Names can be entered directly in MS Active Directory. The database names are then resolved via AD. However, this method requires a schema extension in MS Active Directory. This is usually not so easy to implement in larger environments.
  • Use of an Oracle Internet Directory (OID) for the directory based administration of Oracle Net Service Names. But OID is anything but lean.
Oracle Net Service Names
Oracle Network Service Names Resolution

Another option is to directly implement Oracle Enterprise User Security based on Oracle Internet Directory or Oracle Unified Directory. However, this solution requires a corresponding Oracle Directory Services Plus license. On the other hand, with Oracle Enterprise User Security, authentication and authorisation are also set up centrally. In addition to the license costs, there are also increased implementation and operating costs. For the central administration of the Oracle Net Service Names alone, this is a bit much effort, especially if you want to use Oracle Centrally Managed Users instead of Oracle Enterprise User Security. As a simple alternative for directory-based Oracle Net Service Names resolution, an open source LDAP directory service can of course always be used. A proven solution, just not officially supported by Oracle.

A few days ago, Oracle fulfilled my long-awaited Christmas wish and adjusted the Restricted Use License for OUD and OID 😎🥳. The changes are effective immediately for all current Oracle versions, i.e. Oracle 12.2, 18c, 19c and 21c. For the older database versions, e.g. 12.1 and 11.2, the restricted use licenses have not been adjusted.

Restricted Use Licenses for Directory Naming

This now allows the use of Oracle Unified Directory to build an LDAP directory for Oracle Net Service Names resolution with any Oracle edition except Oracle Database Express Edition. With the help of my scripts on GitHub oehrlis/oudbase you can build an OUD directory within a few minutes. I will give an example of this in a later blog post.
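
In the meantime, if you want to have a look at the scripts, a simple first step is to clone the repository. The URL below is derived from the repository name oehrlis/oudbase mentioned above.

# get the OUD base scripts from GitHub
git clone https://github.com/oehrlis/oudbase.git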

Conclusion

Nothing stands in the way of setting up a simple LDAP directory service, based on Oracle Unified Directory, for Oracle Net Services name resolution. This is especially good news for larger environments and Oracle Centrally Managed Users deployments, where until now the only option was to manage the Oracle Net Service Names manually or to use an OpenSource LDAP directory service.


Have fun setting up your Oracle Unified Directory based Oracle Net Service Names server. Stay tuned for more technical information and how-tos…

DOAG Oracle Database Vault

This morning I had the opportunity to give a presentation on Oracle Database Vault at the DOAG conference.

Abstract

Oracle Database Vault has been on the market for a few years now. The product has been constantly improved over the years. But where is it worthwhile to use it? Which security measures can be implemented with it? And from whom does DB Vault protect me at all? In this presentation, the technical possibilities of Database Vault 19c / 21c will be explained in addition to the experiences from two customer projects. We will try to show where the use of Database Vault is worthwhile under certain circumstances and under which conditions it is not. This also includes whether protection against snakes and thieves is ensured.

PS: I asked my children what kind of presentation I should submit. The answers were snakes, thieves and cheetahs…

Sketch of a classified Oracle Database with Database Vault and Transparent Data Encryption (TDE)

Questions

Question: Is Oracle Database Vault a separate product?

Answer: Yes, it is an option for Oracle Database Enterprise Edition. Besides the Oracle Database Vault option, a valid Oracle Database Enterprise Edition license is required.

Question: What is a REALM?

Answer: A REALM is a grouping of database schemas, database objects, and database roles that must be secured for a given application. A REALM is some kind of security zone for DB objects. Users who are owners or participants of a REALM can access the objects within that REALM. RULEs and RULE SETs are used to authorise users for REALMs. See also About Realms in the Oracle® Database Vault Administrator’s Guide 21c. A minimal example of creating such a realm with DBMS_MACADM is shown after the next sketch.

Sketch of a Database Vault configuration with three REALMS
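
The following is a minimal sketch of how a realm could be created and populated with the DBMS_MACADM API. The realm name, the protected schema TVD_HR and the audit option are example values and not the exact configuration from the customer projects mentioned above.

-- create a realm for the HR application objects
BEGIN
    dbms_macadm.create_realm(
    realm_name    => 'HR Realm',
    description   => 'Realm protecting the HR application schema',
    enabled       => dbms_macutl.g_yes,
    audit_options => dbms_macutl.g_realm_audit_fail);
END;
/
 
-- add all objects of the application schema to the realm
BEGIN
    dbms_macadm.add_object_to_realm(
    realm_name   => 'HR Realm',
    object_owner => 'TVD_HR',
    object_name  => '%',
    object_type  => '%');
END;
/
 
-- authorise the schema owner as realm participant
BEGIN
    dbms_macadm.add_auth_to_realm(
    realm_name   => 'HR Realm',
    grantee      => 'TVD_HR',
    auth_options => dbms_macutl.g_realm_auth_participant);
END;
/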

If you have any further questions, don’t hesitate to ask them via comment on this blog post. Alternatively, you can also contact me directly.

Slides to my Lecture

Links and References

Notes on Oracle Password Security

This morning I had the great opportunity to participate in the virtual event AUSOUG Connect 2021 with my lecture Security Best Practice: Oracle passwords, but secure!. For me it was a premiere and a pleasure to be part of an Oracle event in Australia.

Oracle password security is a small but central topic in database security. Database security and especially passwords have been on my mind for a while. Over time, one or another example on the subject has accumulated. Therefore it is not always easy to concentrate on the essential points. And as so often, in today’s lecture there was not enough time to go into all possible examples and demos. For this reason, I will try to briefly pick up one or two points in this blog post.

Demo Environment

A simple Docker container with Oracle Database 21.4.0.0 is used for the lab environment. The following sketch shows the schematic structure of the environment. The scripts for this environment and the following demos are available in the GitHub repository oehrlis/orapwd. The Docker container is based on oehrlis/docker. However, the scripts can also be used in another Oracle database container or Oracle database.

Logon Process

The logon process of the Oracle database can be analysed most easily with a network analysis, e.g. with Wireshark. This way you can see exactly which TCP packets are sent from or to the DB server. The network traffic is collected either directly with Wireshark or with the help of tcpdump on the DB server. Here is an example of how to use tcpdump. This requires that tcpdump is installed in advance. The command has to be run as root.

bash-4.2# mkdir -p /u01/config/tcpdump
bash-4.2# tcpdump -i eth0 -s 65535 -w /u01/config/tcpdump/tcpdump_$(date "+%Y%m%d_%H%M").dmp
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes

Then you can connect via SQL*Plus and run test queries. After finishing the tests you can stop tcpdump and analyse the created dump file with Wireshark. After loading the tcpdump file, select a TNS packet for analysis and select Follow TCP Stream. In the new window you can see the information that is exchanged between the database server and the client, as shown in the following picture.

Example output of a TNS network analysis with Wireshark

With unencrypted SQL*Net connections you can see very nicely the connection establishment, the session key exchange and subsequently also the SQL statements which are sent from the client to the server. With an ALTER USER scott IDENTIFIED BY tiger; the statement, including the password, is only parsed on the server and is therefore sent unencrypted over the network.

Different Password Verifier

As explained in the lecture, you can use ALLOWED_LOGON_VERSION_SERVER or ALLOWED_LOGON_VERSION_CLIENT in sqlnet.ora to specify which version of the authentication protocol and thus which password verifier should be used. See also the Oracle® Database Net Services Reference 21c or the Oracle® Database Security Guide 21c.

Let’s set ALLOWED_LOGON_VERSION_SERVER to 11 in sqlnet.ora, either using vi or by just appending it to the end of sqlnet.ora with echo.

echo "SQLNET.ALLOWED_LOGON_VERSION_SERVER=11" >> $TNS_ADMIN/sqlnet.ora

Now create and configure a couple of users using SQL*Plus, preferably one user per password verifier. We set the password hashes explicitly with IDENTIFIED BY VALUES.

ALTER SESSION SET CONTAINER=pdb1;
CREATE USER test_10g IDENTIFIED BY VALUES 'AF310E4D20D06950';
CREATE USER test_11g IDENTIFIED BY VALUES
'S:6702B83E88D277BFC378AD6B22DD1AE01895A254470F8124A9D3C5347056';
CREATE USER test_12c IDENTIFIED BY VALUES
'T:45738A7B75C9E31ED0C533BCF4931084658A143FD7CF826B980A88EA6C4F0BE66C28DA7085BCAE386723029BA967DC4F45E9C146F6FA7C22E44BA2C1BD2F56F8C22291D417E26D4B810003F3F055EDFF';
CREATE USER test_all IDENTIFIED BY Welcome1;

Don’t forget to grant some privileges.

GRANT CREATE SESSION TO test_10g;
GRANT CREATE SESSION TO test_11g;
GRANT CREATE SESSION TO test_12c;
GRANT CREATE SESSION TO test_all;

GRANT SELECT_CATALOG_ROLE TO test_10g;
GRANT SELECT_CATALOG_ROLE TO test_11g;
GRANT SELECT_CATALOG_ROLE TO test_12c;
GRANT SELECT_CATALOG_ROLE TO test_all;

Now let’s see what we have in SYS.USER$.

SET LINESIZE 160 PAGESIZE 200
COL name FOR a10
COL password FOR a16
COL spare4 FOR a20
SELECT name,password,spare4 FROM user$ WHERE name LIKE 'TEST_%' ORDER BY 1;

NAME	   PASSWORD	    SPARE4
---------- ---------------- ----------------------------------------
TEST_10G   AF310E4D20D06950
TEST_11G		    S:6702B83E88D277BFC378AD6B22DD1AE01895A2
			    54470F8124A9D3C5347056

TEST_12C		    T:45738A7B75C9E31ED0C533BCF4931084658A14
			    3FD7CF826B980A88EA6C4F0BE66C28DA7085BCAE
			    386723029BA967DC4F45E9C146F6FA7C22E44BA2
			    C1BD2F56F8C22291D417E26D4B810003F3F055ED
			    FF

TEST_ALL   4932A1B4C59EC3D0 S:FA89B6A242F2E80B1F45E2A7861D9CF49F51ED
			    34B4D2FABAA319561AEEE4;T:E66E6EEDA917E09
			    28E40C05E5B5E8D34D20AA1FDB1F108E4EE8DCE2
			    31EE59BD17BBEB58F83DD01713911A96E817BE8D
			    F28584350991611EF366CC2AEE9CBBAB668D69C8
			    03C92639BC3853F527A1B8DB3

In DBA_USERS we see the password verifier versions.

SET LINESIZE 160 PAGESIZE 200
COL username FOR a10
COL password_versions FOR a20
SELECT username, password_versions FROM dba_users WHERE username LIKE 'TEST_%';
USERNAME   PASSWORD_VERSIONS
---------- --------------------
TEST_10G   10G
TEST_11G   11G
TEST_ALL   10G 11G 12C
TEST_12C   12C

As you can see, Oracle created all three password hashes for the user TEST_ALL, as ALLOWED_LOGON_VERSION_SERVER is set to 11. We now change it to 12 and see that the user TEST_10G can no longer connect.

host sed -i 's/SQLNET\.ALLOWED_LOGON_VERSION_SERVER.*/SQLNET\.ALLOWED_LOGON_VERSION_SERVER=12/' $cdn/admin/sqlnet.ora
SQL> connect test_10g/Welcome1@pdb1
ERROR:
ORA-01017: invalid username/password; logon denied

Warning: You are no longer connected to ORACLE

Changing ALLOWED_LOGON_VERSION_SERVER to 12a will then cause that only the users TEST_12C and TEST_ALL can log in.

host sed -i 's/SQLNET\.ALLOWED_LOGON_VERSION_SERVER.*/SQLNET\.ALLOWED_LOGON_VERSION_SERVER=12a/' $cdn/admin/sqlnet.ora
SQL> connect test_11g/Welcome1@pdb1
ERROR:
ORA-01017: invalid username/password; logon denied

SQL> connect test_12c/Welcome1@pdb1
Connected.
SQL> connect test_all/Welcome1@pdb1
Connected.

When the password for the user TEST_ALL is now changed, Oracle will only create the 12C password hash.

SET LINESIZE 160 PAGESIZE 200
COL username FOR a10
COL password_versions FOR a20
SELECT username, password_versions FROM dba_users WHERE username LIKE 'TEST_%';

USERNAME   PASSWORD_VERSIONS
---------- --------------------
TEST_10G   10G
TEST_11G   11G
TEST_ALL   12C
TEST_12C   12C

Check Passwords

To check for default Oracle passwords we can query DBA_USERS_WITH_DEFPWD. Here we see that in PDB1 the user SCOTT still has a default password.

ALTER SESSION SET CONTAINER=pdb1;

Session altered.

SELECT username FROM dba_users_with_defpwd;

USERNAME
-----------
SCOTT

For Oracle 10g password verifiers respectively passwords stored in the SYS.USER$.PASSWORD column, we can also verify the password hashes using the script verify_alluser_passwords.sql. This script just calculates a few password hashes based on an embedded dictionary and compares them. You can see that the user TEST_10G has a weak password.

SQL> @verify_alluser_passwords.sql
User			      Status	Password
----------------------------- -----------------------------
SYS			      0 	OK
AUDSYS			      8 	OK
SYSTEM			      0 	OK
OUTLN			      8 	OK
...
SCOTT			      16	OK
TVD_HR			      0 	OK
TVD_HR_SEC		      0 	OK
TEST_10G		      0 	WELCOME1
TEST_11G		      0 	OK
TEST_12C		      0 	OK
TEST_ALL		      0 	OK

Let’s manually create an Oracle 10g password hash using create_password_hash.sql. The script uses DBMS_CRYPTO to create the hash value.

SQL> @create_password_hash.sql system ieShae0
Username : system
Password : ieShae0
Hash	 : 0AD56CF5F1CB8D2A
SQL	 : alter user system identified by values '0AD56CF5F1CB8D2A';

We can now create a hash file for the hashcat tool to run a brute-force attack on the hash.

echo "0AD56CF5F1CB8D2A:SYSTEM" > demo.hash

hashcat has a couple of options and parameters. Please see https://hashcat.net/hashcat/ for more detailed information. For now we just use the following parameters:

  • --increment will start to brute force with shorter lengths, e.g. 4 characters
  • --custom-charset1 to define numbers and lowercase characters
  • --hash-type Oracle 7+ respectively password verifier 10G
  • --show shows the cracked password

Let’s start a hashcat session

hashcat --attack-mode 3 --increment --increment-min 4 --custom-charset1 ?l?d --hash-type 3100 ./demo.hash ?1?1?1?1?1?1?1

This session is not using any dictionary or rule-based attack. It is testing all password combinations according to the format defined above. This can take a moment. To speed this up, you would have to use a dictionary, rule-based or combined attack.

hashcat (v6.1.1) starting...

/usr/local/Cellar/hashcat/6.1.1/share/hashcat/OpenCL/m03100_a3-optimized.cl: Pure kernel not found, falling back to optimized kernel
OpenCL API (OpenCL 1.2 (Sep  5 2021 22:39:07)) - Platform #1 [Apple]
====================================================================
* Device #1: Intel(R) Core(TM) i9-9980HK CPU @ 2.40GHz, skipped
* Device #2: Intel(R) UHD Graphics 630, 1472/1536 MB (384 MB allocatable), 24MCU
* Device #3: AMD Radeon Pro 5500M Compute Engine, 8112/8176 MB (2044 MB allocatable), 24MCU

/usr/local/Cellar/hashcat/6.1.1/share/hashcat/OpenCL/m03100_a3-optimized.cl: Pure kernel not found, falling back to optimized kernel
Minimum password length supported by kernel: 0
Maximum password length supported by kernel: 30

Hashes: 1 digests; 1 unique digests, 1 unique salts
Bitmaps: 16 bits, 65536 entries, 0x0000ffff mask, 262144 bytes, 5/13 rotates

Applicable optimizers applied:
* Optimized-Kernel
* Zero-Byte
* Not-Iterated
* Single-Hash
* Single-Salt
* Brute-Force

Watchdog: Hardware monitoring interface not found on your system.
Watchdog: Temperature abort trigger disabled.

Host memory required for this attack: 602 MB
Session..........: hashcat
Status...........: Cracked
Hash.Name........: Oracle H: Type (Oracle 7+)
Hash.Target......: 0AD56CF5F1CB8D2A:SYSTEM
Time.Started.....: Thu Nov 11 13:52:50 2021 (1 min, 2 secs)
Time.Estimated...: Thu Nov 11 13:53:52 2021 (0 secs)
Guess.Mask.......: ?1?1?1?1?1?1?1 [7]
Guess.Charset....: -1 ?l?d, -2 Undefined, -3 Undefined, -4 Undefined 
Guess.Queue......: 4/4 (100.00%)
Speed.#2.........:  4586.9 kH/s (9.83ms) @ Accel:8 Loops:32 Thr:8 Vec:1
Speed.#3.........:   365.5 MH/s (7.82ms) @ Accel:64 Loops:32 Thr:64 Vec:1
Speed.#*.........:   370.1 MH/s
Recovered........: 1/1 (100.00%) Digests
Progress.........: 23809572864/78364164096 (30.38%)
Rejected.........: 0/23809572864 (0.00%)
Restore.Point....: 399360/1679616 (23.78%)
Restore.Sub.#2...: Salt:0 Amplifier:3552-3584 Iteration:0-32
Restore.Sub.#3...: Salt:0 Amplifier:5920-5952 Iteration:0-32
Candidates.#2....: NPRH5I1 -> TFCIN8M
Candidates.#3....: LDD45HR -> SBRYHG0

Started: Thu Nov 11 13:52:44 2021
Stopped: Thu Nov 11 13:53:53 2021

The result can also be retrieved by using the --show option. By the way, the brute-force attack on this hash took only about one minute on my MacBook Pro.

hashcat --hash-type 3100 ./demo.hash --show

Verify the performance of your environment by running a benchmark on the Oracle 7+ hash type.

hashcat --benchmark --hash-type 3100
hashcat (v6.1.1) starting in benchmark mode...

Benchmarking uses hand-optimized kernel code by default.
You can use it in your cracking session by setting the -O option.
Note: Using optimized kernel code limits the maximum supported password length.
To disable the optimized kernel code in benchmark mode, use the -w option.

OpenCL API (OpenCL 1.2 (Sep  5 2021 22:39:07)) - Platform #1 [Apple]
====================================================================
* Device #1: Intel(R) Core(TM) i9-9980HK CPU @ 2.40GHz, skipped
* Device #2: Intel(R) UHD Graphics 630, 1472/1536 MB (384 MB allocatable), 24MCU
* Device #3: AMD Radeon Pro 5500M Compute Engine, 8112/8176 MB (2044 MB allocatable), 24MCU

Benchmark relevant options:
===========================
* --optimized-kernel-enable

Hashmode: 3100 - Oracle H: Type (Oracle 7+)

Speed.#2.........:  4946.0 kH/s (78.31ms) @ Accel:128 Loops:16 Thr:8 Vec:1
Speed.#3.........:   349.2 MH/s (71.25ms) @ Accel:32 Loops:512 Thr:64 Vec:1
Speed.#*.........:   354.2 MH/s

Started: Thu Nov 11 13:57:56 2021
Stopped: Thu Nov 11 13:58:02 2021

As you can see, hashcat, John the Ripper, etc. are powerful but also dangerous tools when it comes to password engineering 😊

Recommendations

  • Keep your Oracle Clients and Server up to date
  • Consider using strong Authentication
    • Kerberos and SSL based Authentication
  • Don’t use legacy password verifier
    • Use Oracle password file version 12.2
    • Explicitly configure ALLOWED_LOGON_VERSION_SERVER to 12a and exclusively use 12c hash values
  • Revise your password policies
  • User awareness training
  • Reduce the attack vector
    • Limit access to password hash values
    • Know where you have password hash values
    • Start using NO AUTHENTICATION for schema owners where no login is required (see the example after this list)
  • Implement general database hardening
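
A minimal sketch of what this could look like. TVD_HR is the schema owner from the demo environment above, APP_DATA is just a hypothetical new schema owner.

-- switch an existing schema owner to NO AUTHENTICATION
ALTER USER tvd_hr NO AUTHENTICATION;
 
-- create a new schema owner which cannot log in at all
CREATE USER app_data NO AUTHENTICATION;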

Slides to my Lecture

AUSOUG virtual Connect 2021 – Oracle passwords, but secure! by Stefan Oehrli

Disclaimer

The use of methods and tools to verify password hashes is not allowed everywhere. In particular, their use on productive environments is explicitly not recommended. Please check whether the tools are allowed in the respective environment, company, country, etc. before using them. The author disclaims any liability.

A simple Container based Guacamole Setup

Another food blog post? No, of course not. Guacamole in this case does not refer to the avocado-based dip, but to the Apache clientless remote desktop gateway. We use this as an alternative access to the OCI-based Trivadis LAB environment. We use an automatically configured Guacamole environment with an Nginx reverse proxy and Let’s Encrypt SSL certificates. The detailed description of this Guacamole environment is the goal of this blog post.

For the Trivadis LAB and training environment in Oracle Cloud (OCI), we typically use a bastion host to access the various resources. In training courses, it is not always possible for all participants to use SSH clients such as Putty, MobaXterm, SSH command line and so on. This is especially true when training courses are increasingly conducted virtually, as is currently the case. Participants often use their company’s IT devices for this purpose. These devices are often protected accordingly and do not allow SSH access to public IP addresses. An HTTP-based remote access gateway such as Guacamole offers a simple alternative to allow participants to access the training environment.

Structure

Apache itself already provides Guacamole as a container. A simple setup contains two containers: the guacamole container for the remote desktop gateway and the guacd container for the server-side proxy. Guacamole uses Tomcat with a dedicated port and is not specially protected by default. For the training environment, however, access should be possible via HTTPS, and the use of the default ports is also desirable. To meet these requirements while using the existing Docker images, Guacamole is run behind an Nginx reverse proxy. The SSL certificates are generated using Let’s Encrypt. The following graphic shows the schematic structure of the environment.

Guacamole Setup with Nginx Reverse Proxy

A few notes on the containers:

  • nginx, configured as a reverse proxy. By default this container binds to HTTP and HTTPS to allow client access. A minimal sketch of the proxy configuration is shown after this list.
  • certbot, a simple container to create the Let’s Encrypt certificates.
  • guacamole, the Apache Guacamole container
  • guacd, the server-side proxy server for Guacamole
  • mysql, the database used to store the Guacamole connection information

Quickstart

Make sure you have Docker and Docker Compose installed in your environment. If not, you can find more information in the chapter Install Docker Compose of the Docker documentation.
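A quick sanity check that both tools are available (the version output will of course differ per installation):

# verify the Docker and Docker Compose installation
docker --version
docker-compose --version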

  • Clone the GitHub repository
git clone https://github.com/oehrlis/guacamole.git
  • Review and update the .env and docker-compose.yml file. Make sure to define at least the following environment variables, either with export, in the .env file, or by changing the values directly in the compose file:
    • EMAIL – a valid e-mail address for certificate renewal (default: none)
    • HOSTNAME – the hostname for the nginx server and certificate name (default: $(hostname))
    • DOMAINNAME – the domain name for the nginx server and certificate name (default: trivadislabs.com)
    • STAGING_ENABLE – set to 1 if you are testing your setup, to avoid hitting request limits when creating certificates
  • Pull the required Docker images
docker pull guacamole/guacamole
docker pull guacamole/guacd
docker pull mysql/mysql-server
docker pull nginx
docker pull certbot/certbot
  • Prepare the MySQL database
cd guacamole
./bin/prepare_initdb.sh
  • Start guacamole containers
docker-compose up -d guacamole mysql guacd
  • Perform the certificate challenge. Nginx is first started with a self-signed certificate in order to then execute the actual certificate request. Define the following variables if you have not yet updated the .env file:
export HOSTNAME="guacamole"
export DOMAINNAME="example.org"
export EMAIL="info@example.org"
export STAGING_ENABLE=1
./bin/prepare_certs.sh
  • Start all containers
docker-compose up -d

The passwords for the Guacamole admin, the Guacamole database and MySQL root are generated if not explicitly specified. You can find them in the logs as well as in the .env file, which is updated by the prepare_initdb.sh script.
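For reference, a minimal .env file could look like the following sketch. All values are placeholders, and the exact format is an assumption based on the variables described in this post.

# example .env file (placeholder values)
EMAIL=info@example.org
HOSTNAME=guacamole
DOMAINNAME=example.org
STAGING_ENABLE=1
GUACADMIN_USER=guacadmin
GUACADMIN_PASSWORD=<generated by prepare_initdb.sh>
MYSQL_PASSWORD=<generated by prepare_initdb.sh>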

As soon as your containers are up and running, you can access your Guacamole remote desktop gateway.

Login Dialog Guacamole
Example Guacamole Home Page

Automation

There is an even quicker way to set up the Guacamole Docker stack by using the script setup_guacamole.sh. In particular, this method is used to set up the Guacamole stack in an OCI bootstrap process. By default it will use the OS user avocado (what else 😉), but the user can be customised via the variable GUACAMOLE_USER.

git clone https://github.com/oehrlis/guacamole.git
cd guacamole
export HOSTNAME="guacamole"
export DOMAINNAME="example.org"
export EMAIL="info@example.org"
export STAGING_ENABLE=1
./bin/setup_guacamole.sh

Customised Configuration

As you can see above, the installation can be customised with a few variables, in particular the following.

Variable             Default Value                        Description
HOSTNAME             $(hostname)                          Hostname of the bastion host used to create the certificate request, i.e. this name must be resolvable via DNS
DOMAINNAME           trivadislabs.com                     Domain name used for the certificate request
EMAIL                admin@DOMAINNAME                     A valid e-mail address used for the certificate challenge
GUACAMOLE_USER       avocado                              Guacamole OS user
GUACAMOLE_BASE       /home/${GUACAMOLE_USER}/guacamole    Guacamole base folder
GUACADMIN_USER       guacadmin                            Guacamole admin user
GUACADMIN_PASSWORD   n/a                                  Guacamole admin password. Will be generated and stored in .env
MYSQL_PASSWORD       n/a                                  MySQL database password. Will be generated and stored in .env
Environment Variables

Additionally, it is possible to create the Guacamole connections directly when creating the configuration. For this purpose, the SQL script 02_connections.sql in the config/mysql directory has to be adapted, for example with an SSH connection:

INSERT INTO guacamole_connection (connection_name, protocol) 
    VALUES ('Database Server (db - 10.0.1.6)', 'ssh');
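-- parameters for the connection created above; the connection_id (here 2) must match the generated ID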
INSERT INTO guacamole_connection_parameter VALUES (2, 'hostname', '10.0.1.6');
INSERT INTO guacamole_connection_parameter VALUES (2, 'port', '22');
INSERT INTO guacamole_connection_parameter VALUES (2, 'username', 'oracle');

Of course you can also enter passwords, SSH keys etc. directly, as in the sketch below. All SQL scripts in this directory are executed when the MySQL DB is created, so further customisations are possible. More information about the configuration of Guacamole can be found in the documentation.
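A hedged sketch of how an SSH password could be appended to the same script before the MySQL database is created; the parameter value and the connection ID 2 are assumptions and must be adapted to your connections:

# append optional credentials for the SSH connection to the init script (sketch only)
cat >> config/mysql/02_connections.sql <<'EOF'
INSERT INTO guacamole_connection_parameter VALUES (2, 'password', 'changeme');
EOF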

OCI Bastion Host

How about using this on an OCI bastion host? Nothing simpler than that. You just have to configure your VCN and bastion host in the following manner.

  • Allow incoming traffic for ports 80 and 443
  • Install Docker on the bastion host (see the sketch after this list)
  • Register the public IP of your bastion host in a DNS zone created in OCI
  • Deploy the Guacamole stack manually as explained above.
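A minimal sketch for the Docker part, assuming an Oracle Linux 7 bastion host. The package name, the Docker Compose version and the use of firewalld are assumptions, and the security list or NSG rules for ports 80 and 443 are configured separately in OCI:

# install Docker and Docker Compose (Oracle Linux 7 assumed)
sudo yum install -y docker-engine
sudo systemctl enable --now docker
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" \
  -o /usr/local/bin/docker-compose && sudo chmod +x /usr/local/bin/docker-compose

# open HTTP/HTTPS on the host firewall
sudo firewall-cmd --permanent --add-port=80/tcp --add-port=443/tcp
sudo firewall-cmd --reload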

In one of my next blog posts, I will show you how to automatically deploy your bastion host with the Guacamole stack by using my Terraform modules tvdlab-bastion and tvdlab-base.

Conclusion

The container-based setup for Apache Guacamole is relatively simple and adds value to accessing the Trivadis cloud-based training and lab environment. The approach described here has also been used successfully for various test and PoC environments. To make the setup in OCI even easier, the Terraform modules tvdlab-bastion and tvdlab-base use this approach to deploy the Guacamole stack on the bastion host directly in the bootstrap process. But more about that in another blog post.

References

Home office, starving and the favourite takeaway is far away

It happens that the children are at school and my wife is at work at the hospital. For lunch, a light snack from the takeaway around the corner would be perfect. No problem at work in the city, but when working from home? The offer in my area is relatively modest. Hey, but why not just make a quick Piadina for lunch! For once something else than OCI, DB security or other database stuff. Not sure what a Piadina is?

Piadina or Piada is a thin Italian flatbread, typically prepared in the Romagna historical region (Forlì, Cesena, Ravenna and Rimini). It is usually made with white flour, lard or olive oil, salt and water.

Wikipedia. 2020. “Piadina.” Last modified 23. October 2020. https://en.wikipedia.org/wiki/Piadina

The dough

Unfortunately, I cannot fall back on an old family recipe. Therefore, simply a variant of the dough which has always worked well for me. In principle, the ingredients are relatively simple and available in almost every household. One takes (or borrows from the neighbour):

  • 200g white flour
  • 1 tsp salt
  • 1 tbsp olive oil
  • 1 dl sparkling mineral water

Mix everything together into a smooth dough and let it rest ideally for 20-30 minutes. If you are starving, you can skip the 30 minutes. There is no block corruption or anything…

The roasting / baking

Then divide the dough into 4 equal parts and roll each out into a thin patty. To do this, take a rolling pin. If you don’t have one, you can use an empty and clean beer, wine or whiskey bottle from the night before 🤪. You can then briefly bake the patty in a pan without adding oil.

The filling

As a filling you can take whatever you find in your fridge that fits a Piadina. Usually you take some cheese (Parmigiano, Pecorino, …), Italian raw ham, arugula, salami, dried tomatoes etc.

My Piadina today looks like this

Conclusion

A Piadina always goes well. It also doesn’t take much more time to prepare than deploying an Oracle 21c DB into OCI. After enjoying 1-2 Piadina, you are strengthened for a productive and successful afternoon. Enjoy

How to get an Oracle 21c Database on the Oracle Cloud

A few hours ago Oracle published a blog post about the new version Oracle 21c. See Introducing Oracle Database 21c. It is again an innovation release with a couple of interesting new features and enhancements. The online Oracle documentation library provides some information on these enhancements.

Studying documentation and blog posts is always interesting. But it gets much more exciting when you can do first hands-on with the new release. Thanks to the Oracle Cloud this is easily possible. With a few clicks in OCI you can create a DB System with Oracle 21c. Then nothing stands in the way of testing new features all night long.

Requirements

Unfortunately, it is not possible to get Oracle 21c as an Always Free version. At least I did not find a way to do so. My workmate Philipp Salvisberg pointed out to me that it is indeed possible to create an Always Free 21c ATP DB; you just have to be in the right OCI region. Anyway, since I would like to test a few infrastructure features like DB Nest, Kerberos, EUS etc., I have to set up a DB System rather than an Oracle Autonomous Database anyway. In order to finally get started, a few prerequisites must be met:

  • An Oracle Account and active OCI Tenant
  • A few OCI credits
  • Compartment where you plan to deploy your DB system
  • A VCN for your DB System.

VCN

Usually I create my VCNs with Terraform, typically with a bastion host, a public and a private subnet. For this I use my Terraform OCI modules which are available in the Terraform registry. See tvdlab-base, tvdlab-bastion or tvdlab-vcn. A blog post about them is still on my todo list 😊. For setting up the Oracle 21c DB System I assume that a corresponding VCN is available.

I’ll use the following information for my setup

  • Compartment DB21C
  • VCN db21c00

DB System

To set up the 21c database, go to the main page and select Bare Metal, VM, and Exadata in the menu.

Create a DB System by pressing the corresponding blue button. Do not be surprised: in the following picture you can already see a 21c DB system.

Compared to the Oracle 20c preview, it does not matter whether you choose Bare Metal or Virtual Machine; 21c is available for both environments. This also applies to the storage management software. I select a Virtual Machine with the Logical Volume Manager.

DB System wizard part 1

As I only use the DB System for simple feature tests and engineering, I keep the size to a minimum. As Oracle software I choose the Enterprise Edition. Other editions, including Standard Edition, are also possible.

DB System wizard part 2

Add your public SSH key(s) to be able to access the DB System later on via SSH. Depending on your OS you can use PuTTY or ssh-keygen for this. Since I work on a Mac, I’ll create the key pair with ssh-keygen, as in the sketch below.
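A minimal sketch for creating a dedicated key pair; the file name and comment are arbitrary examples:

# create a dedicated SSH key pair for the DB System (file name is just an example)
ssh-keygen -t rsa -b 4096 -C "oci db21c" -f ~/.ssh/id_rsa_db21c
# print the public key to paste into the wizard
cat ~/.ssh/id_rsa_db21c.pub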

Add SSH public keys

Finalise the system configuration by selecting your VCN and a corresponding subnet. In my case it is the VCN db21c00 and the private subnet db21c00 private.

DB System wizard part 3

After pressing next you come to the second page of the DB System wizard, where we configure the DB itself.

DB System wizard part 4

Choose a fancy name for your DB or, like me, just TDB21c. By default the Database Image 19c is selected. You can change it and select 21c.

DB System wizard part 5

Finish the DB System wizard by selecting Create DB System.

DB System wizard part 6

It will now take a while until your Oracle 21c Database is ready to use. A good moment to grab a cup of coffee or write a blog post 😉. I’ll do this now as well and finish this post in a couple of minutes…

DB System Setup

My 21c DB System has finished in the meantime and can be used.

Since I created the VM in a private subnet, it is not possible to access the DB directly from the internet. I have to connect through the bastion host, either via SSH or, for SQL Developer, using SSH port forwarding.

Below is an example of connecting to the new DB System via the bastion host using the SSH ProxyJump option. In this example I use the following SSH parameters:

  • -A enables forwarding of the authentication agent connection
  • -J is used to specify the proxy host used to initiate the connection. In my case the bastion host.
  • db21c is the alias hostname for the public IP of my bastion host
  • deadalus is the hostname for my DB system
  • opc is the user used to connect to the different compute instances. You can also connect directly as the oracle user if you put the public keys in the .ssh/authorized_keys file of the user oracle.
soe@gaia:~/ [ic19300] ssh -A -J opc@db21 opc@deadalus
Last login: Wed Dec  9 05:35:20 2020 from 10.0.0.2
[opc@deadalus ~]$ sudo su - oracle
Last login: Wed Dec  9 05:36:22 UTC 2020 from 10.0.0.2 on pts/0
[oracle@deadalus ~]$ sqlplus / as sysdba

SQL*Plus: Release 21.0.0.0.0 - Production on Wed Dec 9 05:42:38 2020
Version 21.1.0.0.0

Copyright (c) 1982, 2020, Oracle.  All rights reserved.


Connected to:
Oracle Database 21c Enterprise Edition Release 21.0.0.0.0 - Production
Version 21.1.0.0.0

SQL> 

Alternatively, you can do SSH port forwarding for port 1521 and connect directly to the database with SQL Developer.

soe@gaia:~/ [ic19300] ssh -A opc@db21 -L 1521:deadalus:1521
Last login: Wed Dec  9 05:33:00 2020 from yyy.yyy.yyy.yyy
--------------------------------------------------------------------
-
- Welcome to the bastion / jump host for the OCI environment
- hostname          :   db21c
- Public IP         :   xxx.xxx.xxx.xxx
- Guacamole Console :   http://secret
- Guacamole Admin   :   <USER>
- Guacamole Password:   <PASSWORD>

[opc@db21c-bastion00 ~]$ 
SQL Developer connection setting
SQL Developer SQL Worksheet

If you don’t want to deal with the command line and SSH commands, newer versions of SQL Developer let you set up a connection directly via SSH port forwarding. Select SSH from the View menu and create a new SSH connection. The following details must be added:

  • Name: Name of your SSH connection
  • Host: Public IP of your bastion host
  • Username: Usually just opc
  • Key File: The private key file matching the public key used above
  • Name: Name of your SSH port forwarding rule
  • Host: Private IP address of the DB system
  • Port: The TCP port configured on the DB system, usually 1521
  • Local Port: Use an automatically assigned local port, then you do not have to bother if it is already in use
New SSH Host Dialog

The SSH host can now be used to configure the DB connection. You only have to select the connection type SSH.

SQL Developer Connection using SSH

Conclusion

In OCI you can create a new engineering system for Oracle 21c relatively quickly. With the bastion host I have chosen a more complex but secure setup. The separation into private and public subnets gives me the flexibility to test a few features which require additional infrastructure components, e.g. Centrally Managed Users (CMU) or Enterprise User Security (EUS), which both require additional services such as Oracle Unified Directory or MS Active Directory. Running everything via public IPs is rather a bad idea. Anyway, following this approach you now have an Oracle 21c Database. I wish you happy engineering.

References