Exam 1z0-062 Oracle Database 12c: Installation and Administration

Question No : 1 Examine the parameters for your database instance (the NAME, TYPE, and VALUE listing is not reproduced).
Which statement is true in this scenario?
A. Undo data is written to flashback logs after 1200 seconds
B. Inactive undo data is retained for 1200 seconds even if subsequent transactions fail due to lack of space in the undo tablespace
C. You can perform a Flashback Database operation only within the duration of 1200 seconds
D. An attempt is made to keep inactive undo for 1200 seconds but transactions may overwrite the undo before that time has elapsed

Answer: A

Question No : 2 A user establishes a connection to a database instance by using an Oracle Net connection. You want to ensure the following:
1. The user account must be locked after five unsuccessful login attempts.
2. Data read per session must be limited for the user.
3. The user must have a maximum of 10 minutes of session idle time before being logged off.
How would you accomplish this?
A. by granting a secure application role to the user
B. by implementing Database Resource Manager
C. by using Oracle Label Security options
D. by assigning a profile to the user
Answer: D
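A minimal sketch of option D, assuming a hypothetical profile name and the SCOTT user; resource limits such as IDLE_TIME and LOGICAL_READS_PER_SESSION are only enforced while RESOURCE_LIMIT is TRUE:

SQL> CREATE PROFILE app_user_prof LIMIT
       FAILED_LOGIN_ATTEMPTS 5              -- lock the account after five failed logins
       LOGICAL_READS_PER_SESSION 100000     -- cap the blocks read per session
       IDLE_TIME 10;                        -- log the session off after 10 idle minutes
SQL> ALTER USER scott PROFILE app_user_prof;
SQL> ALTER SYSTEM SET RESOURCE_LIMIT = TRUE;  -- resource (non-password) limits need this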

Question No : 3 As a user of the ORCL database, you establish a database link to the remote HQ database such that all users in the ORCL database may access tables only from the SCOTT schema in the HQ database. SCOTT's password is TIGER. The service name "HQ" is used to connect to the remote HQ database.
Which command would you execute to create the database link?
A. CREATE DATABASE LINK HQ USING 'HQ'
B. CREATE DATABASE LINK HQ CONNECT TO CURRENT_USER USING 'HQ'
C. CREATE PUBLIC DATABASE LINK HQ CONNECT TO scott IDENTIFIED BY tiger USING 'HQ'
D. CREATE DATABASE LINK HQ CONNECT TO scott IDENTIFIED BY tiger USING 'HQ'

Answer: B

Question No : 4 What happens if a maintenance window closes before a job that collects optimizer statistics completes?
A. The job is terminated and the gathered statistics are not saved
B. The job is terminated but the gathered statistics are not published
C. The job continues to run until all statistics are gathered
D. The job is terminated and statistics for the remaining objects are collected the next time the maintenance window opens

Answer: D

Question No : 5 You plan to create a database by using the Database Configuration Assistant (DBCA), with the following specifications:
✑ Applications will connect to the database via a middle tier.
✑ The number of concurrent user connections will be high.
✑ The database will have a mixed workload, with the execution of complex BI queries scheduled at night.
Which DBCA option must you choose to create the database?
A. a General Purpose database template with default memory allocation
B. a Data Warehouse database template, with the dedicated server mode option and AMM enabled
C. a General Purpose database template, with the shared server mode option and Automatic Memory Management (AMM) enabled
D. a default database configuration
Answer: C
Reference: http://www ... com/oracle-database/administration/creating-adatabase-using-database-configuration-assistant/

Question No : 6 Which two statements are true about the logical storage structure of an Oracle database?
A. An extent contains data blocks that are always physically contiguous on disk
B. An extent can span multiple segments
C. Each data block always corresponds to one operating system block
D. It is possible to have tablespaces of different block sizes
E. A data block is the smallest unit of I/O in data files

Answer: B,D Reference: http://docs

Question No : 7 Which two statements correctly describe the relationship between data files and logical database structures?
A. A segment cannot span data files
B. A data file can belong to only one tablespace
C. An extent cannot span data files
D. The size of an Oracle data block in a data file should be the same as the size of an OS block

Answer: B,C Reference: https://mohibalvi

Question No : 8 Which statement is true about the Log Writer process?
A. It writes when it receives a signal from the checkpoint process (CKPT)
B. It writes concurrently to all members of multiplexed redo log groups
C. It writes after the Database Writer process writes dirty buffers to disk
D. It writes when a user commits a transaction
Answer: D
Reference: http://docs ... htm (see log writer process (LGWR))

Question No : 9 The ORCL database is configured to support shared server mode. You want to ensure that a user connecting remotely to the database instance has a one-to-one ratio between client and server processes.
Which connection method guarantees that this requirement is met?
A. connecting by using an external naming method
B. connecting by using the easy connect method
C. creating a service in the database by using the dbms_service.create_service procedure and using this service for creating a local naming service
D. connecting by using the local naming method with the SERVER = DEDICATED parameter set in the tnsnames.ora file for the net service
E. connecting by using a directory naming method
Answer: C,E

Question No : 10 Which two tasks can be performed on an external table?
updating the table by using an update statement

Question No : 11 Which three statements are true about a job chain?
A. It can contain a nested chain of jobs
B. It can be used to implement dependency-based scheduling
C. It cannot invoke the same program or nested chain in multiple steps in the chain
D. It cannot have more than one dependency
E. It can be executed using event-based or time-based schedules

Answer: A,B,E Reference: http://docs

Question No : 12 The HR user receives the following error while inserting data into the sales table:
ERROR at line 1: ORA-01653: unable to extend table HR.SALES by 128 in tablespace USERS
On investigation, you find that the USERS tablespace uses Automatic Segment Space Management (ASSM). It is the default tablespace for the HR user with an unlimited quota on it.
Which two methods would you use to resolve this error?
A. Altering the data file associated with the USERS tablespace to extend automatically
B. Adding a data file to the USERS tablespace
C. Changing segment space management for the USERS tablespace to manual
D. Creating a new tablespace with autoextend enabled and changing the default tablespace of the HR user to the new tablespace
E. Enabling resumable space allocation by setting the RESUMABLE_TIMEOUT parameter to a nonzero value
Answer: A,D
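A short sketch of options A and B, assuming hypothetical data file paths for the USERS tablespace:

SQL> ALTER DATABASE DATAFILE '/u01/app/oracle/oradata/orcl/users01.dbf'
       AUTOEXTEND ON NEXT 10M MAXSIZE UNLIMITED;     -- option A: let the existing file grow
SQL> ALTER TABLESPACE users
       ADD DATAFILE '/u01/app/oracle/oradata/orcl/users02.dbf' SIZE 100M;   -- option B: add a file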

Question No : 13 Which three factors influence the optimizer's choice of an execution plan?
A. the optimizer_mode initialization parameter
B. operating system (OS) statistics
C. object statistics in the data dictionary
E. fixed baselines
Answer: A,B
Reference: http://docs

Question No : 14 Examine the resources consumed by a database instance whose current Resource Manager plan is displayed.
SQL> SELECT name, active_sessions, queue_length, consumed_cpu_time, cpu_waits, cpu_wait_time FROM v$rsrc_consumer_group
NAME ACTIVE_SESSIONS QUEUE_LENGTH CONSUMED_CPU_TIME CPU_WAITS CPU_WAIT_TIME
(output rows for the DSS_QUERIES and OTHER_GROUPS consumer groups are not reproduced legibly)
Which two statements are true?
A. An attempt to start a new session by a user belonging to DSS_QUERIES fails with an error
B. An attempt to start a new session by a user belonging to OTHER_GROUPS fails with an error
C. The CPU_WAIT_TIME column indicates the total time that sessions in the consumer group waited for the CPU due to resource management
D. The CPU_WAIT_TIME column indicates the total time that sessions in the consumer group waited for the CPU due to I/O waits and latch or enqueue contention
E. A user belonging to the DSS_QUERIES resource consumer group can create a new session but the session will be queued

Answer: C,E

Question No : 15 Which action takes place when a file checkpoint occurs?
A. The checkpoint position is advanced in the checkpoint queue
B. All buffers for a checkpointed file that were modified before a specific SCN are written to disk by DBWn and the SCN is stored in the control file
C. The Database Writer process (DBWn) writes all dirty buffers in the buffer cache to data files
D. The Log Writer process (LGWR) writes all redo entries in the log buffer to online redo log files

Answer: C

Question No : 16 Examine the structure of the sales table, which is stored in a locally managed tablespace with Automatic Segment Space Management (ASSM) enabled (the DESCRIBE output showing the Name, Null?, and Type columns is not reproduced).
What should you ensure before the start of the operation?
A. Row movement is enabled
B. Referential integrity constraints for the table are disabled
C. No queries are running on this table
D. Extra disk space equivalent to the size of the segment is available in the tablespace
E. No pending transaction exists on the table

Answer: D

Question No : 17 Which task would you recommend before using the Database Upgrade Assistant (DBUA) to upgrade a single-instance Oracle 11g R2 database to Oracle Database 12c?
A. shutting down the database instance that is being upgraded
B. running the … .pl script to run the upgrade processes in parallel
C. running the Pre-Upgrade Information Tool
D. … .ora file to the new ORACLE_HOME
Answer: C
Reference: http://docs
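A hedged sketch of answer C: in 12c Release 1 the Pre-Upgrade Information Tool is the preupgrd.sql script shipped with the new home, run while connected to the source 11g R2 database; the path shown is an assumption:

$> sqlplus / as sysdba
SQL> @/u01/app/oracle/product/12.1.0/dbhome_1/rdbms/admin/preupgrd.sql
SQL> -- review the generated pre-upgrade log and run the suggested fixup scripts before starting DBUA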

Question No : 18 Your database is open and the listener LISTENER is up. You issue the command:
LSNRCTL> RELOAD
What is the effect of RELOAD on sessions that were originally established by the listener?
A. Only sessions based on static listener registrations are disconnected
B. Existing connections are not disconnected; however, they cannot perform any operations until the listener completes the re-registration of the database instance and service handlers
C. The sessions are not affected and continue to function normally
D. All the sessions are terminated and active transactions are rolled back
Answer: B

Question No : 19 Which statement is true regarding the startup of a database instance?
A. The instance does not start up normally and requires manual media recovery after a shutdown using the abort option
B. Uncommitted transactions are rolled back during the startup of the database instance after a shutdown using the immediate option
C. There is no difference in the underlying mechanics of the startup whether the database is shut down by using the immediate option or the abort option
D. Media recovery is required when the database is shut down by using either the immediate option or the abort option
E. Instance recovery is not required if the database instance was shut down by using SHUTDOWN IMMEDIATE

Answer: B Reference: http://docs

Question No : 20 Examine the memory-related parameters set in the SPFILE of an Oracle database:
memory_max_target = 6G
memory_target = 5G
pga_aggregate_target = 500M
sga_max_size = 0
sga_target = 0
Which statement is true?
A. Only SGA components are sized automatically
B. Memory is dynamically re-allocated between the SGA and PGA as needed
C. The size of the PGA cannot grow automatically beyond 500 MB
D. The value of the MEMORY_TARGET parameter cannot be changed dynamically

Answer: C

Question No : 21 Which two statements are true about extents?
A. Blocks belonging to an extent can be spread across multiple data files
B. Data blocks in an extent are logically contiguous but can be non-contiguous on disk
C. The blocks of a newly allocated extent, although free, may have been used before
D. Data blocks in an extent are automatically reclaimed for use by other objects in a tablespace when all the rows in a table are deleted

Answer: B,C

Question No : 22 You execute the commands:
SQL> CREATE USER sidney IDENTIFIED BY out_standing1 DEFAULT TABLESPACE users QUOTA 10M ON users TEMPORARY TABLESPACE temp ACCOUNT UNLOCK
SQL> GRANT CREATE SESSION TO Sidney
Which two statements are true?
A. The create user command fails if any role with the name Sidney exists in the database
B. The user sidney can connect to the database instance but cannot perform sort operations because no space quota is specified for the temp tablespace
C. The user sidney is created but cannot connect to the database instance because no …
D. The user sidney can connect to the database instance but requires relevant privileges to create objects in the users tablespace
E. The user sidney is created and authenticated by the operating system

Answer: A,E

Question No : 23 Examine the query and its output:
SQL> SELECT reason, metric_value FROM dba_outstanding_alerts
REASON  METRIC_VALUE
(output rows not reproduced)
After 30 minutes, you execute the same query:
SQL> SELECT reason, metric_value FROM dba_outstanding_alerts
REASON  METRIC_VALUE
(output rows not reproduced)
Which statement explains the difference in the output?
A. The threshold alerts were cleared and transferred to DBA_ALERT_HISTORY
B. An Automatic Workload Repository (AWR) snapshot was taken before the execution of the second query
C. An Automatic Database Diagnostic Monitor (ADDM) report was generated before the execution of the second query
D. The database instance was restarted before the execution of the second query
Answer: D

Question No : 24 Which two statements are true?
A. A role cannot be assigned external authentication
B. A role can be granted to other roles
C. A role can contain both system and object privileges
D. The predefined resource role includes the unlimited_tablespace privilege
E. All roles are owned by the sys user
F. The predefined connect role is always automatically granted to all new users at the time of their creation
Answer: B,C
Reference: http://docs ... htm#DBSEG9987 (the functionality of roles)

Question No : 25 Identify three valid options for adding a pluggable database (PDB) to an existing multitenant container database (CDB).
A. Use the CREATE PLUGGABLE DATABASE statement to create a PDB using the files from the SEED
B. Use the CREATE DATABASE ... ENABLE PLUGGABLE DATABASE statement to provision a PDB by copying files from the SEED
C. Use the DBMS_PDB package to clone an existing PDB
D. Use the DBMS_PDB package to plug an Oracle 12c non-CDB database into an existing CDB
E. Use the DBMS_PDB package to plug an Oracle 11g Release 2 (11…

Answer: A,C,D Explanation: Use the CREATE PLUGGABLE DATABASE statement to create a pluggable database (PDB)

This statement enables you to perform the following tasks: * (A) Create a PDB by using the seed as a template Use the create_pdb_from_seed clause to create a PDB by using the seed in the multitenant container database (CDB) as a template

The files associated with the seed are copied to a new location and the copied files are then associated with the new PDB

The files associated with the source PDB are copied to a new location and the copied files are associated with the new PDB

This operation is called cloning a PDB

The source PDB can be plugged in or unplugged

If plugged in, then the source PDB can be in the same CDB or in a remote CDB

If the source PDB is in a remote CDB, then a database link is used to connect to the remote CDB and copy the files
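A minimal SQL sketch of the two operations described above (create from the seed, then clone an existing PDB); the PDB names, admin user, and FILE_NAME_CONVERT paths are assumptions:

SQL> CREATE PLUGGABLE DATABASE pdb1
       ADMIN USER pdb1admin IDENTIFIED BY pdb1pwd
       FILE_NAME_CONVERT = ('/u01/oradata/cdb1/pdbseed/', '/u01/oradata/cdb1/pdb1/');   -- copy the seed files
SQL> CREATE PLUGGABLE DATABASE pdb2 FROM pdb1
       FILE_NAME_CONVERT = ('/u01/oradata/cdb1/pdb1/', '/u01/oradata/cdb1/pdb2/');      -- clone an existing PDB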

Question No : 26 Your database supports a DSS workload that involves the execution of complex queries. Currently, the library cache contains the ideal workload for analysis. You want to analyze some of the queries for an application that are cached in the library cache.
What must you do to receive recommendations about the efficient use of indexes and materialized views to improve query performance?
A. Create a SQL Tuning Set (STS) that contains the queries cached in the library cache and run the SQL Tuning Advisor (STA) on the workload captured in the STS
B. Run the Automatic Database Diagnostic Monitor (ADDM)
C. Create an STS that contains the queries cached in the library cache and run the SQL Performance Analyzer (SPA) on the workload captured in the STS
D. Create an STS that contains the queries cached in the library cache and run the SQL Access Advisor on the workload captured in the STS
Answer: D
Explanation: * SQL Access Advisor is primarily responsible for making schema modification recommendations, such as adding or dropping indexes and materialized views.

SQL Tuning Advisor makes other types of recommendations, such as creating SQL profiles and restructuring SQL statements

By using SQL Tuning Advisor and SQL Access Advisor, you can invoke the query optimizer in advisory mode to examine a SQL statement or set of statements and determine how to improve their efficiency

SQL Tuning Advisor and SQL Access Advisor can make various recommendations, such as creating SQL profiles, restructuring SQL statements, creating additional indexes or materialized views, and refreshing optimizer statistics

Note: * Decision support system (DSS) workload * The library cache is a shared pool memory structure that stores executable SQL and PL/SQL code

This cache contains the shared SQL and PL/SQL areas and control structures such as locks and library cache handles

Reference: Tuning SQL Statements
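A hedged sketch of the first step of answer D, loading cached statements into a SQL Tuning Set with DBMS_SQLTUNE; the STS name and the parsing-schema filter are assumptions, and the resulting set would then be given to SQL Access Advisor:

SQL> DECLARE
       cur DBMS_SQLTUNE.SQLSET_CURSOR;
     BEGIN
       DBMS_SQLTUNE.CREATE_SQLSET(sqlset_name => 'dss_sts');          -- empty SQL Tuning Set
       OPEN cur FOR
         SELECT VALUE(p)
         FROM   TABLE(DBMS_SQLTUNE.SELECT_CURSOR_CACHE('parsing_schema_name = ''SH''')) p;  -- filter is an assumption
       DBMS_SQLTUNE.LOAD_SQLSET(sqlset_name => 'dss_sts', populate_cursor => cur);          -- copy from the library cache
     END;
     /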

Question No : 27 The following parameters are set for your Oracle 12c database instance:
OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES=FALSE
OPTIMIZER_USE_SQL_PLAN_BASELINES=TRUE
You want to manage the SQL plan evolution task manually. Examine the following steps:
1. Set the evolve task parameters.
2. Create the evolve task by using the DBMS_SPM.CREATE_EVOLVE_TASK function.
3. Implement the recommendations in the task by using the DBMS_SPM.IMPLEMENT_EVOLVE_TASK function.
4. Execute the evolve task by using the DBMS_SPM.EXECUTE_EVOLVE_TASK function.
5. Report the task outcome by using the DBMS_SPM.REPORT_EVOLVE_TASK function.
Identify the correct sequence of steps:
A. 2, 4, 5
B. …

Explanation (Description of Figure 23-4 follows):
* 2. Create the evolve task by using the DBMS_SPM.CREATE_EVOLVE_TASK function.

This function creates an advisor task to prepare the plan evolution of one or more plans for a specified SQL statement

The input parameters can be a SQL handle, plan name or a list of plan names, time limit, task name, and description

Set the evolve task parameters

SET_EVOLVE_TASK_PARAMETER This function updates the value of an evolve task parameter

In this release, the only valid parameter is TIME_LIMIT

Execute the evolve task by using the DBMS_SPM.EXECUTE_EVOLVE_TASK function.

This function executes an evolution task

The input parameters can be the task name, execution name, and execution description

If not specified, the advisor generates the name, which is returned by the function

Essentially, this function is equivalent to using ACCEPT_SQL_PLAN_BASELINE for all recommended plans

Input parameters include task name, plan name, owner name, and execution name

Report the task outcome by using the DBMS_SPM.REPORT_EVOLVE_TASK function.

This function displays the results of an evolve task as a CLOB

Input parameters include the task name and section of the report to include

Reference: Oracle Database SQL Tuning Guide 12c, Managing SQL Plan Baselines
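A hedged PL/SQL sketch of the manual evolve workflow described above (create, set parameters, execute, implement, report); the SQL handle and time limit are assumptions:

SQL> DECLARE
       tname VARCHAR2(128);
       ename VARCHAR2(128);
       cnt   NUMBER;
       rpt   CLOB;
     BEGIN
       tname := DBMS_SPM.CREATE_EVOLVE_TASK(sql_handle => 'SQL_123456789abcdef0');      -- step 2 (handle is hypothetical)
       DBMS_SPM.SET_EVOLVE_TASK_PARAMETER(task_name => tname,
                                          parameter => 'TIME_LIMIT', value => 300);     -- step 1
       ename := DBMS_SPM.EXECUTE_EVOLVE_TASK(task_name => tname);                       -- step 4
       cnt   := DBMS_SPM.IMPLEMENT_EVOLVE_TASK(task_name => tname);                     -- step 3
       rpt   := DBMS_SPM.REPORT_EVOLVE_TASK(task_name => tname);                        -- step 5: rpt holds the report
     END;
     /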

Question No : 28 In a recent Automatic Workload Repository (AWR) report for your database, you notice a high number of buffer busy waits. The database consists of locally managed tablespaces with freelist-managed segments. On further investigation, you find that the buffer busy waits are caused by contention on data blocks.
Which option would you consider first to decrease the wait event immediately?
A. Decreasing PCTUSED
B. Decreasing PCTFREE
C. Increasing the number of DBWn processes
D. Using Automatic Segment Space Management (ASSM)
E. Increasing db_buffer_cache based on the V$DB_CACHE_ADVICE recommendation
Answer: D
Explanation: * Automatic segment space management (ASSM) is a simpler and more efficient way of managing space within a segment.

It completely eliminates any need to specify and tune the pctused,freelists, and freelist groups storage parameters for schema objects created in the tablespace

If any of these attributes are specified, they are ignored

ASSM is commonly called "bitmap freelists" because that is how Oracle implement the internal data structures for free block management

Note: * Buffer busy waits are most commonly associated with segment header contention inside the data buffer pool (db_cache_size, etc.).

Question No : 29 Examine this command:
SQL> EXEC DBMS_STATS.SET_TABLE_PREFS('SH', 'CUSTOMERS', 'PUBLISH', 'false')
Which three statements are true about the effect of this command?
A. Statistics collection is not done for the CUSTOMERS table when schema stats are gathered
B. Statistics collection is not done for the CUSTOMERS table when database stats are gathered
C. Any existing statistics for the CUSTOMERS table are still available to the optimizer at parse time
D. Statistics gathered on the CUSTOMERS table when schema stats are gathered are stored as pending statistics
E. Statistics gathered on the CUSTOMERS table when database stats are gathered are stored as pending statistics
Answer: C,D,E
Explanation: * SET_TABLE_PREFS Procedure: this procedure is used to set the statistics preferences of the specified table in the specified schema.

To ensure that the cost-based optimizer is still picking the best plan, statistics should be gathered once again

however, the user is concerned that new statistics will cause the optimizer to choose bad plans when the current ones are acceptable

The user can do the following:
EXEC DBMS_STATS.SET_TABLE_PREFS('hr', 'employees', 'PUBLISH', 'false')
By setting the employees table's PUBLISH preference to FALSE, any statistics gathered from now on will not be automatically published.

The newly gathered statistics will be marked as pending

Question No : 30 Examine the following impdp command to import a database over the network from a pre-12c Oracle database (source); the command itself is not reproduced.
Which three are prerequisites for successful execution of the command?
A. The import operation must be performed by a user on the target database with the DATAPUMP_IMP_FULL_DATABASE role, and the database link must connect to a user on the source database with the DATAPUMP_EXP_FULL_DATABASE role
B. All the user-defined tablespaces must be in read-only mode on the source database
C. The export dump file must be created before starting the import on the target database
D. The source and target database must be running on the same platform with the same endianness
E. The path of data files on the target database must be the same as that on the source database
F. The impdp operation must be performed by the same user that performed the expdp operation
Answer: A,B,D
Explanation: In this case, impdp is run without performing any conversion; if the endian format were different, a conversion would have to be performed first.

Question No : 31 Which two are true concerning a multitenant container database with three pluggable databases?
A. All administration tasks must be done to a specific pluggable database
B. The pluggable databases increase patching time
C. The pluggable databases reduce administration effort
D. The pluggable databases are patched together
E. Pluggable databases are only used for database consolidation

Answer: C,E Explanation: The benefits of Oracle Multitenant are brought by implementing a pure deployment choice

The following list calls out the most compelling examples

(E) The many pluggable databases in a single multitenant container database share its memory and background processes, letting you operate many more pluggable databases on a particular platform than you can single databases that use the old architecture

This is the same benefit that schema-based consolidation brings

(D, not B) The investment of time and effort to patch one multitenant container database results in patching all of its many pluggable databases

To patch a single pluggable database, you simply unplug/plug to a multitenant container database at a different Oracle Database software version.

By consolidating existing databases as pluggable databases, administrators can manage many databases as one

For example, tasks like backup and disaster recovery are performed at the multitenant container database level

In Oracle Database 12c, Resource Manager is extended with specific functionality to control the competition for resources between the pluggable databases within a multitenant container database

Note: * Oracle Multitenant is a new option for Oracle Database 12c Enterprise Edition that helps customers reduce IT costs by simplifying consolidation, provisioning, upgrades, and more

It is supported by a new architecture that allows a multitenant container database to hold many pluggable databases

And it fully complements other options, including Oracle Real Application Clusters and Oracle Active Data Guard

An existing database can be simply adopted, with no change, as a pluggable database

and no changes are needed in the other tiers of the application

Reference: 12c Oracle Multitenant

Question No : 32 Examine the current values for the following parameters in your database instance:
SGA_MAX_SIZE = 1024M
SGA_TARGET = 700M
DB_8K_CACHE_SIZE = 124M
LOG_BUFFER = 200M
You issue the following command to increase the value of DB_8K_CACHE_SIZE:
SQL> ALTER SYSTEM SET DB_8K_CACHE_SIZE=140M;
Which statement is true?
A. It fails because the DB_8K_CACHE_SIZE parameter cannot be changed dynamically
B. It succeeds only if memory is available from the autotuned components of the SGA
C. It fails because an increase in DB_8K_CACHE_SIZE cannot be accommodated within SGA_TARGET
D. It fails because an increase in DB_8K_CACHE_SIZE cannot be accommodated within SGA_MAX_SIZE
Answer: D
Explanation: * The SGA_TARGET parameter can be dynamically increased up to the value specified for the SGA_MAX_SIZE parameter, and it can also be reduced.
The exact value depends on environmental factors such as the number of CPUs on the system.
However, the value of DB_8K_CACHE_SIZE remains fixed at all times at 128M.
* DB_8K_CACHE_SIZE: size of the cache for 8K buffers.
* For example, consider this configuration: SGA_TARGET = 512M, DB_8K_CACHE_SIZE = 128M. In this example, increasing DB_8K_CACHE_SIZE by 16M to 144M means that the 16M is taken away from the automatically sized components.

Likewise, reducing DB_8K_CACHE_SIZE by 16M to 112M means that the 16M is given to the automatically sized components

Question No : 33 Which three statements are true concerning unplugging a pluggable database (PDB)?
A. The PDB must be open in read only mode
B. The PDB must be closed
C. The unplugged PDB becomes a non-CDB
D. The unplugged PDB can be plugged into the same multitenant container database (CDB)
E. The unplugged PDB can be plugged into another CDB
F. The PDB data files are automatically removed from disk

Answer: B,D,E Explanation: B, not A: The PDB must be closed before unplugging it

D: An unplugged PDB contains data dictionary tables, and some of the columns in these encode information in an endianness-sensitive way

There is no supported way to handle the conversion of such columns automatically

This means, quite simply, that an unplugged PDB cannot be moved across an endianness difference

E (not F): To exploit the new unplug/plug paradigm for patching the Oracle version most effectively, the source and destination CDBs should share a filesystem so that the PDB’s datafiles can remain in place

Reference: Oracle White Paper, Oracle Multitenant
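A minimal sketch of the unplug sequence (B, then D or E); the PDB name and XML manifest path are assumptions:

SQL> ALTER PLUGGABLE DATABASE pdb1 CLOSE IMMEDIATE;                       -- the PDB must be closed first
SQL> ALTER PLUGGABLE DATABASE pdb1 UNPLUG INTO '/u01/app/oracle/pdb1.xml';
SQL> -- in the same or another CDB, plug it back in using the manifest:
SQL> CREATE PLUGGABLE DATABASE pdb1 USING '/u01/app/oracle/pdb1.xml' NOCOPY;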

Question No : 34 Examine the following command:
CREATE TABLE products (prod_id number(4), Prod_name varchar2(20), Category_id number(30), Quantity_on_hand number(3) INVISIBLE)
Which three statements are true about using an invisible column in the PRODUCTS table?
A. The %ROWTYPE attribute declarations in PL/SQL to access a row will not display the invisible column in the output
B. The DESCRIBE commands in SQL*Plus will not display the invisible column in the output
C. Referential integrity constraint cannot be set on the invisible column
D. The invisible column cannot be made visible and can only be marked as unused
E. A primary key constraint can be added on the invisible column

Answer: A,B,E Explanation: AB: You can make individual table columns invisible

Any generic access of a table does not show the invisible columns in the table

For example, the following operations do not display invisible columns in the output: * SELECT * FROM statements in SQL * DESCRIBE commands in SQL*Plus * %ROWTYPE attribute declarations in PL/SQL * Describes in Oracle Call Interface (OCI) Incorrect: Not D: You can make invisible columns visible

You can make a column invisible during table creation or when you add a column to a table, and you can later alter the table to make the same column visible

Reference: Understand Invisible Columns
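A short sketch of making the column visible again (which is why option D is incorrect); the table and column names follow the question:

SQL> ALTER TABLE products MODIFY (quantity_on_hand VISIBLE);
SQL> ALTER TABLE products MODIFY (quantity_on_hand INVISIBLE);   -- and back again if required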

Question No : 35 You wish to enable an audit policy for all database users, except SYS, SYSTEM, and SCOTT. You issue the following statements:
SQL> AUDIT POLICY ORA_DATABASE_PARAMETER EXCEPT SYS
SQL> AUDIT POLICY ORA_DATABASE_PARAMETER EXCEPT SYSTEM
SQL> AUDIT POLICY ORA_DATABASE_PARAMETER EXCEPT SCOTT
For which database users is the audit policy now active?
A. All users except SYS
B. All users except SCOTT
C. All users except SYS and SCOTT
D. All users except SYS, SYSTEM, and SCOTT
Answer: B
Explanation: If you run multiple AUDIT statements on the same unified audit policy but specify different EXCEPT users, then Oracle Database uses the last exception user list, not any of the users from the preceding lists. This means the effect of the earlier AUDIT POLICY ... EXCEPT statements is overridden by the latest AUDIT POLICY ... EXCEPT statement.

Note: * The ORA_DATABASE_PARAMETER policy audits commonly used Oracle Database parameter settings

By default, this policy is not enabled

The following example shows how to audit all actions on the HR.EMPLOYEES table, except actions by user pmulligan.
Example: Auditing All Actions on a Table
CREATE AUDIT POLICY all_actions_on_hr_emp_pol ACTIONS ALL ON HR.EMPLOYEES;
AUDIT POLICY all_actions_on_hr_emp_pol EXCEPT pmulligan;

Reference: Oracle Database Security Guide 12c, About Enabling Unified Audit Policies

Question No : 36 On your Oracle 12c database, you invoked SQL*Loader to load data into the EMPLOYEES table in the HR schema by issuing the following command:
$> sqlldr hr/[email protected] table=employees
Which two statements are true regarding the command?
A. It succeeds with default settings if the EMPLOYEES table belonging to HR is already defined in the database
B. It fails because no SQL*Loader data file location is specified
C. It fails if the HR user does not have the CREATE ANY DIRECTORY privilege
D. It fails because no SQL*Loader control file location is specified

Answer: A,C Explanation: Note: * SQL*Loader is invoked when you specify the sqlldr command and, optionally, parameters that establish session characteristics

Question No : 37 After implementing full Oracle Data Redaction, you change the default value for the NUMBER data type as follows (the command is not reproduced):
After changing the value, you notice that FULL redaction continues to redact numeric data with zero.
What must you do to activate the new default value for numeric full redaction?
A. Re-enable redaction policies that use FULL data redaction
B. Re-create redaction policies that use FULL data redaction
C. Re-connect the sessions that access objects with redaction policies defined on them
D. Flush the shared pool
E. Restart the database instance
Answer: E
Explanation: About Altering the Default Full Data Redaction Value. You can alter the default displayed values for full Data Redaction policies. By default, 0 is the redacted value when Oracle Database performs full redaction (DBMS_REDACT.FULL) on a column of the NUMBER data type. If you want to change it to another value (for example, 7), then you can run the DBMS_REDACT.UPDATE_FULL_REDACTION_VALUES procedure to modify this value.

The modification applies to all of the Data Redaction policies in the current database instance

After you modify a value, you must restart the database for it to take effect

Note: * The DBMS_REDACT package provides an interface to Oracle Data Redaction, which enables you to mask (redact) data that is returned from queries issued by low-privileged users or an application

You can redact column data by using one of the following methods: / Full redaction

Reference: Oracle Database Advanced Security Guide 12c, About Altering the Default Full Data Redaction Value
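A hedged sketch of changing the NUMBER default and then activating it per answer E; the value 7 follows the explanation, and the parameter name is to the best of my knowledge:

SQL> EXEC DBMS_REDACT.UPDATE_FULL_REDACTION_VALUES(number_val => 7);
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP        -- the new default takes effect only after the instance restart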

Question No : 38 You must track all transactions that modify certain tables in the sales schema for at least three years. Automatic undo management is enabled for the database with a retention of one day.
Which two must you do to track the transactions?
A. Enable supplemental logging for the database
B. Specify undo retention guarantee for the database
C. Create a Flashback Data Archive in the tablespace where the tables are stored
D. Create a Flashback Data Archive in any suitable tablespace
E. Enable Flashback Data Archiving for the tables that require tracking

Answer: D,E Explanation: E: By default, flashback archiving is disabled for any table

You can enable flashback archiving for a table if you have the FLASHBACK ARCHIVE object privilege on the Flashback Data Archive that you want to use for that table

D: Creating a Flashback Data Archive. Create a Flashback Data Archive with the CREATE FLASHBACK ARCHIVE statement, specifying the following: the name of the Flashback Data Archive, the name of its first tablespace, and (optionally) the maximum amount of space that the Flashback Data Archive can use in the first tablespace. For example, create a Flashback Data Archive named fla2 that uses tablespace tbs2, whose data will be retained for two years: CREATE FLASHBACK ARCHIVE fla2 TABLESPACE tbs2 RETENTION 2 YEAR
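A minimal sketch of answers D and E for the three-year requirement in this question; the archive name, tablespace, and table are assumptions:

SQL> CREATE FLASHBACK ARCHIVE fla_sales TABLESPACE fda_ts QUOTA 10G RETENTION 3 YEAR;
SQL> ALTER TABLE sales.orders FLASHBACK ARCHIVE fla_sales;   -- enable tracking for each table that needs it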

Question No : 39 You are the DBA supporting an Oracle 11g Release 2 database and wish to move a table containing several DATE, CHAR, VARCHAR2, and NUMBER data types, and the table's indexes, to another tablespace. The table does not have a primary key and is used by an OLTP application.
Which technique will move the table and indexes while maintaining the highest level of availability to the application?
A. Oracle Data Pump
B. An ALTER TABLE MOVE to move the table and ALTER INDEX REBUILD to move the indexes
C. An ALTER TABLE MOVE to move the table and ALTER INDEX REBUILD ONLINE to move the indexes
D. Online Table Redefinition
E. Edition-Based Table Redefinition
Answer: D
Explanation: * Oracle Database provides a mechanism to make table structure modifications without significantly affecting the availability of the table.

The mechanism is called online table redefinition

Redefining tables online provides a substantial increase in availability compared to traditional methods of redefining tables

Pseudoprimary keys are unique keys with all component columns having NOT NULL constraints

For this method, the versions of the tables before and after redefinition should have the same primary key columns

This is the preferred and default method of redefinition

In this method, a hidden column named M_ROW$$ is added to the post-redefined version of the table

It is recommended that this column be dropped or marked as unused after the redefinition is complete

If COMPATIBLE is set to 10

You can then use the ALTER TABLE ... DROP UNUSED COLUMNS statement to drop it.

You cannot use this method on index-organized tables

Note: * When you rebuild an index, you use an existing index as the data source

Creating an index in this manner enables you to change storage characteristics or move to a new tablespace

Rebuilding an index based on an existing data source removes intra-block fragmentation.

Compared to dropping the index and using the CREATE INDEX statement, re-creating an existing index offers better performance

Incorrect: Not E: Edition-based redefinition enables you to upgrade the database component of an application while it is in use, thereby minimizing or eliminating down time
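A hedged DBMS_REDEFINITION sketch of answer D, using ROWID-based redefinition because the table has no primary key; the schema, table, and interim table names are assumptions, and the interim table must already exist in the target tablespace:

SQL> BEGIN
       -- ORDERS_INT is a hypothetical interim table created beforehand in the target tablespace
       DBMS_REDEFINITION.CAN_REDEF_TABLE('APP', 'ORDERS', DBMS_REDEFINITION.CONS_USE_ROWID);
       DBMS_REDEFINITION.START_REDEF_TABLE('APP', 'ORDERS', 'ORDERS_INT',
                                           options_flag => DBMS_REDEFINITION.CONS_USE_ROWID);
     END;
     /
SQL> DECLARE
       errs PLS_INTEGER;
     BEGIN
       DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('APP', 'ORDERS', 'ORDERS_INT',
                                               num_errors => errs);        -- copies indexes, triggers, grants
       DBMS_REDEFINITION.FINISH_REDEF_TABLE('APP', 'ORDERS', 'ORDERS_INT');
     END;
     /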

Question No : 40 To implement Automatic Memory Management (AMM), you set the following parameters (not reproduced):
When you try to start the database instance with these parameter settings, you receive the following error message:
SQL> startup
ORA-00824: cannot set SGA_TARGET or MEMORY_TARGET due to existing internal settings, see alert log for more information
Identify the reason the instance failed to start.
A. The PGA_AGGREGATE_TARGET parameter is set to zero
B. The STATISTICS_LEVEL parameter is set to BASIC
C. Both the SGA_TARGET and MEMORY_TARGET parameters are set
D. The SGA_MAX_SIZE and SGA_TARGET parameter values are not equal
Answer: B

Explanation: Example: SQL> startup force ORA-00824: cannot set SGA_TARGET or MEMORY_TARGET due to existing internal settings ORA-00848: STATISTICS_LEVEL cannot be set to BASIC with SGA_TARGET or MEMORY_TARGET

Question No : 41 What are two benefits of installing Grid Infrastructure software for a stand-alone server before installing and creating an Oracle database?
A. Effectively implements role separation
B. Enables you to take advantage of Oracle Managed Files
C. Automatically registers the database with Oracle Restart
D. Helps you to easily upgrade the database from a prior release
E. Enables the installation of Grid Infrastructure files on block or raw devices

Answer: A,C Explanation: C: To use Oracle ASM or Oracle Restart, you must first install Oracle Grid Infrastructure for a standalone server before you install and create the database

Otherwise, you must manually register the database with Oracle Restart

Desupport of Block and Raw Devices With the release of Oracle Database 11g release 2 (11

If you intend to upgrade an existing Oracle RAC database, or an Oracle RAC database with Oracle ASM instances, then you can use an existing raw or block device partition, and perform a rolling upgrade of your existing installation

Performing a new installation using block or raw devices is not allowed

Reference: Oracle Grid Infrastructure for a Standalone Server, Oracle Database, Installation Guide, 12c

Question No : 42 Identify two correct statements about multitenant architectures.
A. Multitenant architecture can be deployed only in a Real Application Clusters (RAC) configuration
B. Multiple pluggable databases (PDBs) share certain multitenant container database (CDB) resources
C. Multiple CDBs share certain PDB resources
D. Multiple non-RAC CDB instances can mount the same PDB as long as they are on the same server
E. Patches are always applied at the CDB level
F. A PDB can have a private undo tablespace
Answer: B,E
Explanation: B: Using the 12c Resource Manager, you can control CPU, Exadata I/O, sessions, and parallel servers.

A new 12c CDB Resource Manager Plan will use so-called “Shares” (resource allocations) to specify how CPU is distributed between PDBs

A CDB Resource Manager Plan also can use “utilization limits” to limit the CPU usage for a PDB

With a default directive, you do not need to modify the resource plan for each PDB plug and unplug

E: New paradigms for rapid patching and upgrades

The investment of time and effort to patch one multitenant container database results in patching all of its many pluggable databases

To patch a single pluggable database, you simply unplug/plug to a multitenant container database at a different Oracle Database software version

Incorrect: Not A: * The Oracle RAC documentation describes special considerations for a CDB in an Oracle RAC environment

It is supported by a new architecture that allows a container database to hold many pluggable databases

And it fully complements other options, including Oracle Real Application Clusters and Oracle Active Data Guard

An existing database can be simply adopted, with no change, as a pluggable database

and no changes are needed in the other tiers of the application

Not D: You can unplug a PDB from one CDB and plug it into a different CDB without altering your schemas or applications

A PDB can be plugged into only one CDB at a time

not F: * UNDO tablespace can NOT be local and stays on the CDB level
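A hedged sketch of the share-based CDB resource plan mentioned in the explanation of B; the plan name, PDB names, share counts, and limits are assumptions:

SQL> BEGIN
       DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
       DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(plan => 'cdb_plan', comment => 'share CPU between PDBs');
       DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(plan => 'cdb_plan', pluggable_database => 'PDB1',
                                                       shares => 3, utilization_limit => 100);
       DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(plan => 'cdb_plan', pluggable_database => 'PDB2',
                                                       shares => 1, utilization_limit => 50);
       DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
       DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
     END;
     /
SQL> ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'cdb_plan';   -- issued while connected to the root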

Question No : 43 You upgrade your Oracle database in a multiprocessor environment. As recommended, you execute the following script:
SQL> @utlrp.sql
Which two actions does the script perform?
A. Parallel compilation of only the stored PL/SQL code
B. Sequential recompilation of only the stored PL/SQL code
C. Parallel recompilation of any stored PL/SQL code
D. Sequential recompilation of any stored PL/SQL code
E. Parallel recompilation of Java code
F. Sequential recompilation of Java code
Answer: C,E
Explanation: The utlrp.sql and utlprp.sql scripts are provided by Oracle to recompile all invalid objects in the database. They are typically run after major database changes such as upgrades or patches. They are located in the $ORACLE_HOME/rdbms/admin directory and provide a wrapper on the UTL_RECOMP package. The utlrp.sql script simply calls the utlprp.sql script with a command line parameter of "0". The utlprp.sql script accepts a single integer parameter that indicates the level of parallelism as follows.

Reference: Recompiling Invalid Schema Objects

Question No : 44 Which statement is true concerning dropping a pluggable database (PDB)?
A. The PDB must be open in read-only mode
B. The PDB must be in mount state
C. The PDB must be unplugged
D. The PDB data files are always removed from disk
E. A dropped PDB can never be plugged back into a multitenant container database (CDB)

Answer: C
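A minimal sketch of dropping an unplugged PDB; the name and manifest path are assumptions, and KEEP DATAFILES preserves the files on disk:

SQL> ALTER PLUGGABLE DATABASE pdb1 CLOSE IMMEDIATE;
SQL> ALTER PLUGGABLE DATABASE pdb1 UNPLUG INTO '/u01/app/oracle/pdb1.xml';
SQL> DROP PLUGGABLE DATABASE pdb1 KEEP DATAFILES;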

Question No : 45 You notice a high number of waits for the db file scattered read and db file sequential read events in the recent Automatic Database Diagnostic Monitor (ADDM) report. After further investigation, you find that queries are performing too many full table scans and indexes are not being used even though the filter columns are indexed.
Identify three possible reasons for this.
A. Missing or stale histogram statistics
B. Undersized shared pool
C. High clustering factor for the indexes
D. High value for the DB_FILE_MULTIBLOCK_READ_COUNT parameter
E. Oversized buffer cache
Answer: A,C,D
Explanation: D: DB_FILE_MULTIBLOCK_READ_COUNT is one of the parameters you can use to minimize I/O during table scans.

It specifies the maximum number of blocks read in one I/O operation during a sequential scan

The total number of I/Os needed to perform a full table scan depends on such factors as the size of the table, the multiblock read count, and whether parallel execution is being utilized for the operation

Question No : 46 Which three features work together to allow a SQL statement to have different cursors for the same statement based on different selectivity ranges?
A. Bind Variable Peeking
B. SQL Plan Baselines
C. Adaptive Cursor Sharing
D. Bind variable used in a SQL statement
E. Literals in a SQL statement
Answer: A,C,E
Explanation: * In bind variable peeking (also known as bind peeking), the optimizer looks at the value in a bind variable when the database performs a hard parse of a statement.

When a query uses literals, the optimizer can use the literal values to find the best plan

However, when a query uses bind variables, the optimizer must select the best plan without the presence of literals in the SQL text

This task can be extremely difficult

By peeking at bind values the optimizer can determine the selectivity of a WHERE clause condition as if literals had been used, thereby improving the plan

C: Oracle 11g/12g uses Adaptive Cursor Sharing to solve this problem by allowing the server to compare the effectiveness of execution plans between executions with different bind variable values

If it notices suboptimal plans, it allows certain bind variable values, or ranges of values, to use alternate execution plans for the same statement

This functionality requires no additional configuration

Question No : 47 You notice a performance change in your production Oracle 12c database

You want to know which change caused this performance difference

Which method or feature should you use?
A. Compare Period ADDM report
B. AWR Compare Period report
C. Active Session History (ASH) report
D. Taking a new snapshot and comparing it with a preserved snapshot
Answer: B
Explanation: The awrddrpt.sql report is the Automated Workload Repository Compare Period Report. The awrddrpt.sql script is located in the $ORACLE_HOME/rdbms/admin directory.

Incorrect: Not A: Compare Period ADDM Use this report to perform a high-level comparison of one workload replay to its capture or to another replay of the same capture

Only workload replays that contain at least 5 minutes of database time can be compared using this report

Question No : 48 You want to capture column group usage and gather extended statistics for better cardinality estimates for the CUSTOMERS table in the SH schema. Examine the following steps:
1. Issue the SELECT DBMS_STATS.CREATE_EXTENDED_STATS('SH', 'CUSTOMERS') FROM dual statement.
2. Execute the DBMS_STATS.SEED_COL_USAGE(null, 'SH', 500) procedure.
3. Execute the required queries on the CUSTOMERS table.
4. Issue the SELECT DBMS_STATS.REPORT_COL_USAGE('SH', 'CUSTOMERS') FROM dual statement.
Identify the correct sequence of steps.

Explanation: Seed column usage. Oracle must observe a representative workload in order to determine the appropriate column groups.

Using the new procedure DBMS_STATS.SEED_COL_USAGE, you tell Oracle how long it should observe the workload.

Step 2: (3) You don't need to execute all of the queries in your work during this window

You can simply run explain plan for some of your longer running queries to ensure column group information is recorded for these queries

(1) Create the column groups At this point you can get Oracle to automatically create the column groups for each of the tables based on the usage information captured during the monitoring window

You simply have to call the DBMS_STATS.CREATE_EXTENDED_STATS function for each table.

This function requires just two arguments, the schema name and the table name

From then on, statistics will be maintained for each column group whenever statistics are gathered on the table

Note: * DBMS_STATS.REPORT_COL_USAGE reports column usage information and records all the SQL operations the database has processed for a given object. While the optimizer has traditionally analyzed the distribution of values within a column, it does not collect value-based relationships between columns; DBMS_STATS.CREATE_EXTENDED_STATS is used to relate the columns together.

Unlike a traditional procedure that is invoked via an execute (“exec”) statement, Oracle extended statistics are created via a select statement
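A hedged sketch of the workflow described above (seed usage, run the workload, report, create the column groups, regather statistics); the 300-second monitoring window and the sample query are assumptions:

SQL> EXEC DBMS_STATS.SEED_COL_USAGE(NULL, NULL, 300);                      -- start a 300-second monitoring window
SQL> EXPLAIN PLAN FOR SELECT * FROM sh.customers
       WHERE cust_state_province = 'CA' AND country_id = 52790;            -- representative query (assumed)
SQL> SELECT DBMS_STATS.REPORT_COL_USAGE('SH', 'CUSTOMERS') FROM dual;      -- review the recorded usage
SQL> SELECT DBMS_STATS.CREATE_EXTENDED_STATS('SH', 'CUSTOMERS') FROM dual; -- create the column groups
SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS('SH', 'CUSTOMERS');                -- gather stats on the new groups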

Question No : 49 Which three statements are true about Automatic Workload Repository (AWR)?
A. All AWR tables belong to the SYSTEM schema
B. The AWR data is stored in memory and in the database
C. The snapshots collected by AWR are used by the self-tuning components in the database
D. AWR computes time model statistics based on time usage for activities, which are displayed in the V$SYS_TIME_MODEL and V$SESS_TIME_MODEL views
E. AWR contains system-wide tracing and logging information

Answer: B,C,E Explanation: * A fundamental aspect of the workload repository is that it collects and persists database performance data in a manner that enables historical performance analysis

The mechanism for this is the AWR snapshot

On a periodic basis, AWR takes a “snapshot” of the current statistic values stored in the database instance’s memory and persists them to its tables residing in the SYSAUX tablespace

Question No : 50 You upgraded your database from pre-12c to a multitenant container database (CDB) containing pluggable databases (PDBs). Examine the query and its output (not reproduced):
Which two tasks must you perform to add users with SYSBACKUP, SYSDG, and SYSKM privilege to the password file?
A. Assign the appropriate operating system groups to SYSBACKUP, SYSDG, SYSKM
B. Grant SYSBACKUP, SYSDG, and SYSKM privileges to the intended users
C. Re-create the password file with SYSBACKUP, SYSDG, and SYSKM privilege and the FORCE argument set to No
D. Re-create the password file with SYSBACKUP, SYSDG, and SYSKM privilege, and FORCE argument set to Yes
E. Re-create the password file in the Oracle Database 12c format
Answer: B,D
Explanation: * orapwd / You can create a database password file using the password file creation utility, ORAPWD.

The syntax of the ORAPWD command is as follows: orapwd FILE=filename [ENTRIES=numusers] [FORCE={y|n}] [ASM={y|n}] [DBUNIQUENAME=dbname] [FORMAT={12|legacy}] [SYSBACKUP={y|n}] [SYSDG={y|n}] [SYSKM={y|n}] [DELETE={y|n}] [INPUT_FILE=input-fname]

* V$PWFILE_USERS lists users who have been granted SYSDBA and SYSOPER privileges as derived from the password file. Its columns are:
USERNAME (VARCHAR2(30)): the name of the user that is contained in the password file
SYSDBA (VARCHAR2(5)): if TRUE, the user can connect with SYSDBA privileges
SYSOPER (VARCHAR2(5)): if TRUE, the user can connect with SYSOPER privileges
Incorrect: Not E: The format of the V$PWFILE_USERS file is already in the 12c format.
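A hedged sketch of answers B and D, following the orapwd syntax quoted above; the file name and user names are assumptions:

$> orapwd FILE=$ORACLE_HOME/dbs/orapworcl FORMAT=12 FORCE=y
SQL> GRANT SYSBACKUP TO c##bkpadmin;     -- each grant adds a corresponding entry to the password file
SQL> GRANT SYSDG TO c##dgadmin;
SQL> GRANT SYSKM TO c##kmadmin;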

Question No : 51 An application accesses a small lookup table frequently. You notice that the required data blocks are getting aged out of the default buffer cache.
How would you guarantee that the blocks for the table never age out?
A. Configure the KEEP buffer pool and alter the table with the corresponding storage clause
B. Increase the database buffer cache size
C. Configure the RECYCLE buffer pool and alter the table with the corresponding storage clause
D. Configure Automatic Shared Memory Management
E. Configure Automatic Memory Management
Answer: A
Explanation: Schema objects are referenced with varying usage patterns; therefore, their cache behavior may be quite different.

Multiple buffer pools enable you to address these differences

You can use a KEEP buffer pool to maintain objects in the buffer cache and a RECYCLE buffer pool to prevent objects from consuming unnecessary space in the cache

When an object is allocated to a cache, all blocks from that object are placed in that cache

Oracle maintains a DEFAULT buffer pool for objects that have not been assigned to one of the buffer pools
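A minimal sketch of answer A; the pool size and table name are assumptions:

SQL> ALTER SYSTEM SET DB_KEEP_CACHE_SIZE = 64M;                  -- carve out a KEEP pool
SQL> ALTER TABLE app.lookup_codes STORAGE (BUFFER_POOL KEEP);    -- cache this table's blocks in it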

Question No : 52 You connect using SQL*Plus to the root container of a multitenant container database (CDB) with the SYSDBA privilege. The CDB has several pluggable databases (PDBs) open in read/write mode. There are ongoing transactions in both the CDB and PDBs.
What happens after issuing the SHUTDOWN TRANSACTIONAL statement?
A. The shutdown proceeds immediately
B. The shutdown proceeds as soon as all transactions in the PDBs are either committed or rolled back
C. The shutdown proceeds as soon as all transactions in the CDB are either committed or rolled back
D. The shutdown proceeds as soon as all transactions in both the CDB and PDBs are either committed or rolled back
E. The statement results in an error because there are open PDBs

Answer: B Explanation: * SHUTDOWN [ABORT | IMMEDIATE | NORMAL | TRANSACTIONAL [LOCAL]] Shuts down a currently running Oracle Database instance, optionally closing and dismounting a database

If the current database is a pluggable database, only the pluggable database is closed

The consolidated instance continues to run

Shutdown commands that wait for current calls to complete or users to disconnect such as SHUTDOWN NORMAL and SHUTDOWN TRANSACTIONAL have a time limit that the SHUTDOWN command will wait

If all events blocking the shutdown have not occurred within the time limit, the shutdown command cancels with the following message: ORA-01013: user requested cancel of current operation * If logged into a CDB, shutdown closes the CDB instance

To shutdown a CDB or non CDB, you must be connected to the CDB or non CDB instance that you want to close, and then enter SHUTDOWN Database closed

Database dismounted

Oracle instance shut down

To shutdown a PDB, you must log into the PDB to issue the SHUTDOWN command

SHUTDOWN Pluggable Database closed

Note: * Prerequisites for PDB Shutdown

When the current container is a pluggable database (PDB), the SHUTDOWN command can only be used if: The current user has SYSDBA, SYSOPER, SYSBACKUP, or SYSDG system privilege

The privilege is either commonly granted or locally granted in the PDB

The current user exercises the privilege using AS SYSDBA, AS SYSOPER, AS SYSBACKUP, or AS SYSDG at connect time

To close a PDB, the PDB must be open

Question No : 53 You are planning the creation of a new multitenant container database (CDB) and want to store the ROOT and SEED container data files in separate directories. You plan to create the database using SQL statements.
Which three techniques can you use to achieve this?
A. Use Oracle Managed Files (OMF)
B. Specify the SEED FILE_NAME_CONVERT clause
C. Specify the PDB_FILE_NAME_CONVERT initialization parameter
D. Specify the DB_FILE_NAME_CONVERT initialization parameter
E. Specify all files in the CREATE DATABASE statement without using Oracle Managed Files (OMF)
Answer: A,B,C
Explanation: You must specify the names and locations of the seed's files in one of the following ways: * (A) Oracle Managed Files * (B) The SEED FILE_NAME_CONVERT clause * (C) The PDB_FILE_NAME_CONVERT initialization parameter
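A hedged fragment illustrating options B and C; all paths are assumptions and the rest of the CREATE DATABASE clauses are elided:

SQL> CREATE DATABASE cdb1
       ...
       ENABLE PLUGGABLE DATABASE
       SEED FILE_NAME_CONVERT = ('/u01/oradata/cdb1/', '/u01/oradata/cdb1/pdbseed/');
SQL> -- or, before issuing CREATE DATABASE, set in the initialization parameter file:
SQL> --   PDB_FILE_NAME_CONVERT = '/u01/oradata/cdb1/', '/u01/oradata/cdb1/pdbseed/'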

Question No : 54 You are about to plug a multi-terabyte non-CDB into an existing multitenant container database (CDB).
The characteristics of the non-CDB are as follows:
✑ Version: Oracle Database 11g Release 2 (11…
The characteristics of the CDB are as follows:
✑ Version: Oracle Database 12c Release 1 64-bit
✑ Character set: AL32UTF8
✑ National character set: AL16UTF16
✑ O/S: Oracle Linux 6 64-bit
Which technique should you use to minimize down time while plugging this non-CDB into the CDB?
A. Transportable database
B. Transportable tablespace
C. Data Pump full export/import
D. The DBMS_PDB package
E. RMAN
Answer: B
Explanation: * Overview, example: DBMS_PDB.DESCRIBE is used to create an XML file describing the database.