oracle cheat sheet

# groupadd dba
# useradd -g dba oracle
# xauth extract /home/oracle/.Xauthority FQDN/unix:10

Oracle Database Enterprise Edition 12c

Enterprise Manager Express Login

https://FQDN:5500/em
Use the SYS account and its password

alter system set db_securefile='PERMITTED' scope=both;

Oracle Express Edition 11g

notes for installing Oracle Database Express Edition 11g Release 2
Remember - Oracle still expects the swap space to be double the size of the available RAM....

# rpm -i oracle-xe-11.2.0-1.0.x86_64.rpm
# /etc/init.d/oracle-xe configure

finish the installation by opening the web GUI

http://<hostname>:8080/apex
http://<hostname>:8080/apex/apex_admin

environment

set the proper environment to find all the tools needed

oracle$ vi .bash_profile
# .bash_profile
ORACLE_HOME=/u01/app/oracle/product/11.2.0/xe
PATH=$PATH:$ORACLE_HOME/bin

export ORACLE_HOME
export ORACLE_SID=XE
export PATH

APEX

if the default admin account won't work - reset the password to admin. During the initial login you will be forced to change it.

oracle$ cd $ORACLE_HOME/apex
oracle$ sqlplus /nolog
SQL> connect / as sysdba
SQL> @apxxepwd admin 

backup and restore

Offline backup (Cold backup)
A cold OS backup is simple: the database has been cleanly shut down (not crashed, not shutdown abort) so that:
- all datafiles are consistent (same SCN) and no redo is needed in case of restore
- the datafiles are closed: they will not be updated during the copy operation
Thus, it can be restored entirely and the database can be opened without the need to recover.
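A pseudocode sketch of the cold backup sequence (the datafile and backup paths are placeholders, not from this setup - list the real files via v$datafile, v$logfile and v$controlfile first):

```shell
# cold backup sketch - the database must stay down during the copy
sqlplus / as sysdba <<'EOF'
shutdown immediate
exit
EOF
# copy datafiles, redo logs and control files (placeholder paths)
cp /u01/app/oracle/oradata/XE/*.dbf /backup/
sqlplus / as sysdba <<'EOF'
startup
exit
EOF
```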

Online backup (Hot backup) - only possible in ARCHIVELOG mode
A hot backup does the copy while the database is running. That means the copies are inconsistent and need redo applied before they are usable. Recovery is the process of applying redo log information in order to roll file modifications forward as they were done in the original files.
begin/end backup - FAQ
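A pseudocode sketch of the hot backup sequence, using the database-level BEGIN/END BACKUP shown further below (paths are placeholders):

```shell
# hot backup sketch - requires ARCHIVELOG mode
sqlplus / as sysdba <<'EOF'
alter database begin backup;
exit
EOF
# the copies taken here are inconsistent; redo fixes that on restore
cp /u01/app/oracle/oradata/XE/*.dbf /backup/
sqlplus / as sysdba <<'EOF'
alter database end backup;
alter system archive log current;
exit
EOF
```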

listener controls

$ lsnrctl status
$ lsnrctl [start|stop]

SQL Plus

$ sqlplus /nolog
SQL> connect / as sysdba
SQL> SHOW user;
SQL> SELECT * FROM v$version;
archivelog mode - archredo - dba-oracle.com

Complete shutdown/startup

SQL> shutdown immediate
if there are still Oracle processes running (check with # ps ax | grep -i ora)
SQL> shutdown abort
SQL> startup
or
SQL> startup nomount
SQL> alter database mount;
SQL> alter database open;
or
SQL> startup mount
SQL> alter database open;

Suspend/Resume
The ALTER SYSTEM SUSPEND statement halts all input and output (I/O) to datafiles (file header and file data) and control files. The suspended state lets you back up a database without I/O interference. When the database is suspended all preexisting I/O operations are allowed to complete and any new database accesses are placed in a queued state.
The suspend/resume feature is useful in systems that allow you to mirror a disk or file and then split the mirror, providing an alternative backup and restore solution.
Caution:
Do not use the ALTER SYSTEM SUSPEND statement as a substitute for placing a tablespace in hot backup mode. Precede any database suspend operation by an ALTER TABLESPACE BEGIN BACKUP statement.

SQL> ALTER SYSTEM SUSPEND;
System altered
SQL> SELECT DATABASE_STATUS FROM V$INSTANCE;
DATABASE_STATUS : SUSPENDED
SQL> ALTER SYSTEM RESUME;
System altered
SQL> SELECT DATABASE_STATUS FROM V$INSTANCE;
DATABASE_STATUS : ACTIVE

Begin/End-Backup

SQL> ALTER DATABASE BEGIN BACKUP;
if you get
ERROR at line 1:
ORA-01123: cannot start online backup; media recovery not enabled
enable ARCHIVELOG mode first

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.
Database mounted.
SQL> alter database archivelog;
Database altered.
SQL> alter database open;
Database altered.
SQL> SELECT log_mode FROM v$database;
SQL> SHOW PARAMETER log_archive_start;
or
SQL> archive log list;

When in NOARCHIVELOG mode the database runs normally, but there is no capacity to perform any type of point in time recovery operations or online backups.

SQL> select file#,status from v$backup;
SQL> ALTER DATABASE END BACKUP;
if the DB crashed while in backup mode - mount it, end the backup manually and open it
SQL> ALTER DATABASE END BACKUP;
SQL> ALTER DATABASE OPEN;

misc

SQL> SELECT name from v$database;
SQL> SELECT name FROM v$datafile;
SQL> SELECT member FROM v$logfile;
SQL> SELECT name FROM v$controlfile;
manual log switch
SQL> ALTER system switch logfile;
System altered.
SQL> EXIT

Oracle Test Data generation

To start with, I copied a sample I found on the internet

INSERT INTO MITARBEITER (ID,NAME,VORNAME,GEHALT,ANGESTELLT_SEIT)
SELECT
  LEVEL ID
, initcap(dbms_random.string('U', dbms_random.value(3,12))) NAME
, initcap(dbms_random.string('U', dbms_random.value(5,20))) VORNAME
, trunc(dbms_random.value(1000,5000),2) GEHALT
, TRUNC(SYSDATE - dbms_random.value(0,5000)) AS ANGESTELLT_SEIT
FROM dual
CONNECT BY LEVEL <= 1000000;
This will generate 1 Million entries.
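The row-generation logic can be sketched outside the database as well; a rough Python equivalent (column names taken from the SQL above, everything else purely illustrative):

```python
import random
import string
from datetime import date, timedelta

def random_word(lo, hi):
    """Mirrors initcap(dbms_random.string('U', dbms_random.value(lo, hi)))."""
    n = random.randint(lo, hi)
    return "".join(random.choices(string.ascii_uppercase, k=n)).capitalize()

def mitarbeiter_row(row_id, today=date.today()):
    """One row shaped like the SELECT ... CONNECT BY generator above."""
    return {
        "ID": row_id,
        "NAME": random_word(3, 12),
        "VORNAME": random_word(5, 20),
        # int(x * 100) / 100 truncates like trunc(value, 2)
        "GEHALT": int(random.uniform(1000, 5000) * 100) / 100,
        "ANGESTELLT_SEIT": today - timedelta(days=random.randrange(5000)),
    }

rows = [mitarbeiter_row(i) for i in range(1, 1001)]  # 1000 rows, not a million
```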

Unfortunately, Oracle (at least in 11g) does not provide an AUTOINCREMENT attribute. One has to fake it with a sequence. The first value is the next free primary key value.

CREATE SEQUENCE id_seq
 START WITH     1000001
 INCREMENT BY   1
 NOCACHE
 NOCYCLE;
And now add 1 million more rows
INSERT INTO MITARBEITER (ID,NAME,VORNAME,GEHALT,ANGESTELLT_SEIT)
SELECT
id_seq.nextval,
name,
vorname,
gehalt,
angestellt_seit
FROM MITARBEITER WHERE ID<=1000000;
Now you can grow as you wish - drop the WHERE clause and each run doubles the table: 1 million, 2 million, 4 million, 8 million...
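The growth pattern of the self-INSERT (each pass copies the existing rows with fresh sequence IDs) can be sketched in Python - IDs only, purely illustrative:

```python
from itertools import count

# stand-in for CREATE SEQUENCE id_seq START WITH 1000001
id_seq = count(1_000_001)

def double_table(rows):
    """Copy every existing row with a fresh sequence ID (the self-INSERT)."""
    return rows + [dict(r, ID=next(id_seq)) for r in rows]

table = [{"ID": i} for i in range(1, 1001)]  # stand-in for the first batch
for _ in range(3):
    table = double_table(table)
# 1000 -> 2000 -> 4000 -> 8000 rows, all IDs unique
```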

Index rebuild

A standard rebuild takes exclusive locks and is meant for downtimes with no DML.
It is often preferable to rebuild online - use the ONLINE keyword.

alter index meinindex rebuild online parallel 2 nologging; 
or
alter index meinindex rebuild parallel 2 nologging; 
sample
ALTER INDEX HAMMER.MITARBEITER_IDNAMEGEHALT REBUILD COMPRESS 2 NOLOGGING;
ALTER INDEX HAMMER.MITARBEITER_IDNAMEGEHALT REBUILD PARALLEL 2 COMPRESS 2 NOLOGGING;

ALTER INDEX HAMMER.MITARBEITER_IDNAMEGEHALT REBUILD PARALLEL 2 NOCOMPRESS NOLOGGING;

dba-oracle.com - nologging Blog - ORA-01450

SGA Size

DO NOT set sga_max_size lower than the target size - the database won't start up again.

show sga;
show parameter sga_max_size;
alter system set sga_max_size=2G scope=spfile;
startup force

PFiles - SPFiles

plain text files (a stripped-down spfile)

startup pfile=/oracle/product/11.2.0/db_1/dbs/init.ora;
startup pfile=/oracle/product/11.2.0/db_1/dbs/pfileORA11G.ora;
after startup - create a new spfile
CREATE SPFILE='/oracle/product/11.2.0/db_1/dbs/spfileORA11G.ora' FROM PFILE='/oracle/product/11.2.0/db_1/dbs/pfileORA11G.ora';

Open sessions?

SQL> select saddr,username, command, status, schemaname from V$SESSION;

Data Pump - Import/Export

# cd /ora01
# mkdir export
# chown oracle:oinstall export

# su - oracle
$ sqlplus /nolog
SQL> connect / as sysdba

SQL> CREATE DIRECTORY dpump_dir1 AS '/ora01/export';
SQL> GRANT READ,WRITE ON DIRECTORY dpump_dir1 TO hammer;
SQL> exit

$ expdp hammer/kennwort DIRECTORY=dpump_dir1 DUMPFILE=hammer_schema.dmp SCHEMAS=hammer
Export: Release 11.2.0.4.0 - Production on Tue May 30 18:49:51 2017
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "HAMMER"."SYS_EXPORT_SCHEMA_01":  hammer/******** DIRECTORY=dpump_dir1 DUMPFILE=hammer_schema.dmp SCHEMAS=hammer
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 1.765 GB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "HAMMER"."MITARBEITER"                      1.382 GB 33022070 rows
. . exported "HAMMER"."TEST_DATA"                        43.75 MB 1000000 rows
Master table "HAMMER"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for HAMMER.SYS_EXPORT_SCHEMA_01 is:
  /ora01/export/hammer_schema.dmp
Job "HAMMER"."SYS_EXPORT_SCHEMA_01" successfully completed at Tue May 30 18:50:59 2017 elapsed 0 00:01:05
With VERSION=10.1 you could create a dump for an older release.
$ impdp hammer/kennwort DIRECTORY=dpump_dir1 DUMPFILE=hammer_schema.dmp