Thursday, May 31, 2018

Datapump Export Fails with ORA-1950: "no privileges on tablespace" (Doc ID 1633698.1)

In this Document
Symptoms
Cause
Solution
References


APPLIES TO:

Oracle Database - Enterprise Edition - Version 11.2.0.4 and later
Information in this document applies to any platform.
***Checked for relevance on 17-Aug-2015***

SYMPTOMS

The datapump export log shows errors like:
ORA-31642: the following SQL statement fails:
BEGIN "SYS"."DBMS_RULE_EXP_RULES".SCHEMA_CALLOUT(:1,0,1,'11.02.00.04.00'); END;
ORA-01950: no privileges on tablespace 'SYSAUX'

CAUSE

More information about the error can be obtained by setting event 1950 and then reproducing the problem, e.g.:
SQL> connect / as sysdba
SQL> alter system set events '1950 trace name ERRORSTACK level 3';

After reproducing the error as the SYSTEM user, the event can be turned off again using:
SQL> alter system set events '1950 trace name ERRORSTACK off';

The generated trace file shows the following CREATE TABLE statement:
----- Current SQL Statement for this session (sql_id=d72dvzc8mfmpb) -----
create table "SYSMAN".re$action_imp_tab(owner varchar2(30), name varchar2(30), action sys.re$nv_node) tablespace SYSAUX

In this case, the SYSMAN user is not able to create objects in the SYSAUX tablespace.

SOLUTION

Grant unlimited quota on the SYSAUX tablespace to the SYSMAN user, i.e.:
SQL> alter user SYSMAN quota unlimited on SYSAUX;
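
To confirm that the quota is in place, a check like the following can be run (a minimal sketch, not part of the original note; in DBA_TS_QUOTAS a MAX_BYTES value of -1 means UNLIMITED):
-- Verification sketch: MAX_BYTES = -1 indicates an unlimited quota
select username, tablespace_name, bytes, max_bytes
from   dba_ts_quotas
where  username = 'SYSMAN'
and    tablespace_name = 'SYSAUX';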

REFERENCES

NOTE:1531789.1 - DataPump Export (EXPDP) Fails With Errors ORA-39126 ORA-06512 ORA-1950

DataPump Export (EXPDP) Fails With Errors ORA-39126 ORA-06512 ORA-1950 (Doc ID 1531789.1)

In this Document
Symptoms
Cause
Solution
References


APPLIES TO:

Oracle Database - Enterprise Edition - Version 11.2.0.3 and later
Information in this document applies to any platform.
***Checked for relevance on 28-Aug-2014***

SYMPTOMS

DataPump export fails with the following errors:
Export: Release 11.2.0.3.0 - Production on Tue Nov 25 17:33:24 2014
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning option
Starting "ORACLE"."EXP_EST_RDEDWP":  /******** parfile=exp_est_only.par
Estimate in progress using BLOCKS method...
ORA-39126: Worker unexpected fatal error in KUPW$WORKER.FETCH_XML_OBJECTS []
ORA-31642: the following SQL statement fails:
BEGIN "SYS"."DBMS_CDC_EXPDP".SCHEMA_CALLOUT(:1,0,1,'11.02.00.03.00'); END;
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
ORA-06512: at "SYS.DBMS_METADATA", line 1749
ORA-01950: no privileges on tablespace 'SYSAUX'
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
ORA-06512: at "SYS.KUPW$WORKER", line 8996
----- PL/SQL Call Stack -----
object      line  object
handle    number  name
c00000011cd3ce58     20462  package body SYS.KUPW$WORKER
c00000011cd3ce58      9028  package body SYS.KUPW$WORKER
c00000011cd3ce58     10935  package body SYS.KUPW$WORKER
c00000011cd3ce58     13698  package body SYS.KUPW$WORKER
c00000011cd3ce58      4633  package body SYS.KUPW$WORKER
c00000011cd3ce58      9753  package body SYS.KUPW$WORKER
c00000011cd3ce58      1775  package body SYS.KUPW$WORKER
c00000011caa2978         2  anonymous block
Job "ORACLE"."EXP_EST_RDEDWP" stopped due to fatal error at 17:33:41

CAUSE

If the Change Data Capture feature is in use, then Export Data Pump creates a temporary table to hold the information about the change tables; this information is exported from the source so that it can be imported into the target database.
The temporary table is created in the schema that holds the change table and is located in the SYSAUX tablespace. You can check which schemas are involved in either of the following ways.
1. Create an errorstack trace for error ORA-1950 as shown below:
SQL> connect / as sysdba

SQL> alter system set events '1950 trace name ERRORSTACK level 3';
Then repeat the export as the SYSTEM user to reproduce the error.
Turn off the event using:
SQL> alter system set events '1950 trace name ERRORSTACK off';
The trace file for the DW00 Data Pump worker process shows the following DML statements:
ORA-01950: no privileges on tablespace 'SYSAUX'
----- Current SQL Statement for this session (sql_id=c8tgknkym241j) -----
INSERT INTO "CDCSUB".CDC$_IMPVLDTAB_113404 VALUES ('EDM_STAGE_ORDERS', 4, NULL)
INSERT INTO "CDCPUB".CDC$_IMPVLDTAB_113701 VALUES (:"SYS_B_0", :"SYS_B_1", NULL);
Based on the cursor captured for ORA-1950, the issue occurs because the users CDCSUB and CDCPUB do not have a quota on tablespace SYSAUX. The schema names CDCSUB and CDCPUB are only examples; the error can affect any schema that has Change Data Capture active.
-- or: --
2. Alternatively, query CHANGE_TABLES and look for the schema names in the CHANGE_TABLE_SCHEMA column:
SQL> select * from change_tables;

CHANGE_TABLE_SCHEMA            CHANGE_TABLE_NAME              CHANGE_SET_NAME                SOURCE_SCHEMA_NAME             SOURCE_TABLE_NAME              CREATED    CREATED_SCN C       PUB_ID
------------------------------ ------------------------------ ------------------------------ ------------------------------ ------------------------------ --------- ------------ - ------------
CDC                            EMP_CDC                        SYNC_SET                       SCOTT                          EMP                            10-DEC-14     14179663 B        94378
In this example, the CDC schema would be responsible for the errors.

SOLUTION

Grant the required quota:
SQL> alter user CDCSUB quota unlimited on SYSAUX;

SQL> alter user CDCPUB quota unlimited on SYSAUX;
or for the second example:
SQL> alter user CDC quota unlimited on SYSAUX;

Then repeat the export.
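
If it is unclear which CDC schemas still lack a quota, the information from the two steps above can be combined. The following is only a sketch (it assumes the user running it can see CHANGE_TABLES and DBA_TS_QUOTAS; schemas that rely on the UNLIMITED TABLESPACE privilege instead of an explicit quota will also be listed):
-- Sketch: change-table owners without an explicit quota entry on SYSAUX
select distinct ct.change_table_schema
from   change_tables ct
where  not exists (select 1
                   from   dba_ts_quotas q
                   where  q.username = ct.change_table_schema
                   and    q.tablespace_name = 'SYSAUX');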

REFERENCES

NOTE:454849.1 - ORA-1950 Or ORA-1536 During Full Or Schema Database Export
NOTE:19238.1 - OERR: ORA-1950 "no privileges on tablespace '%s'" Reference Note

How to Migrate to different Endian Platform Using Transportable Tablespaces With RMAN (Doc ID 371556.1)

APPLIES TO:

Oracle Database - Enterprise Edition - Version 10.1.0.2 to 12.1.0.1 [Release 10.1 to 12.1]
Oracle Database - Enterprise Edition - Version 12.1.0.2 to 12.1.0.2 [Release 12.1]
Oracle Database - Enterprise Edition - Version 9.2.0.3 to 9.2.0.3 [Release 9.2]
Information in this document applies to any platform.
******************* WARNING *************

Document 1334152.1 Corrupt IOT when using Transportable Tablespace to HP from different OS
Document 13001379.8 Bug 13001379 - Datapump transport_tablespaces produces wrong dictionary metadata for some tables 


GOAL

Starting with Oracle Database 10g, you can transport tablespaces across platforms. This note provides a step-by-step guide on how to do it with ASM datafiles and with OS filesystem datafiles.
If your goal is to migrate a database to a different endian platform, the following high-level steps describe how to migrate it using transportable tablespaces:

1.- Create a new, empty database on the destination platform.
2.- Import objects required for transport operations from the source database into the destination database.
3.- Export transportable metadata for all user tablespaces from the source database.
4.- Transfer data files for user tablespaces to the destination system.
5.- Use RMAN to convert the data files to the endian format of the destination system.
6.- Import transportable metadata for all user tablespaces into the destination database.

You can also convert the datafiles on the source platform and, once converted, transfer them to the destination platform.
The MAA white paper "Platform Migration Using Transportable Tablespace" is available via the link in the References section below.

From 11.2.0.4 and 12c onward, if converting to Linux x86-64, consider following this document:
   Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup [1389592.1]

SOLUTION

Supported platforms

You can query the V$TRANSPORTABLE_PLATFORM view to see the platforms that are supported and to determine each platform's endian format (byte ordering).
SQL> COLUMN PLATFORM_NAME FORMAT A32
SQL> SELECT * FROM V$TRANSPORTABLE_PLATFORM;

PLATFORM_ID PLATFORM_NAME                    ENDIAN_FORMAT
----------- -------------------------------- --------------
          1 Solaris[tm] OE (32-bit)          Big
          2 Solaris[tm] OE (64-bit)          Big
          7 Microsoft Windows IA (32-bit)    Little
         10 Linux IA (32-bit)                Little
          6 AIX-Based Systems (64-bit)       Big
          3 HP-UX (64-bit)                   Big
          5 HP Tru64 UNIX                    Little
          4 HP-UX IA (64-bit)                Big
         11 Linux IA (64-bit)                Little
         15 HP Open VMS                      Little
          8 Microsoft Windows IA (64-bit)    Little
          9 IBM zSeries Based Linux          Big
         13 Linux 64-bit for AMD             Little
         16 Apple Mac OS                     Big
         12 Microsoft Windows 64-bit for AMD Little
         17 Solaris Operating System (x86)   Little

If the source platform and the target platform are of different endianness, then an additional step must be done on either the source or target platform to convert the tablespace being transported to the target format. If they are of the same endianness, then no conversion is necessary and tablespaces can be transported as if they were on the same platform.
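
For example, to compare just the two platforms involved in a planned migration, the same view can be filtered on both platform names (a minimal sketch using two of the platform names listed above; substitute your own source and target names):
-- Sketch: compare the endian format of a specific source and target platform
select platform_id, platform_name, endian_format
from   v$transportable_platform
where  platform_name in ('AIX-Based Systems (64-bit)', 'Linux 64-bit for AMD');
-- Different ENDIAN_FORMAT values mean the datafiles must be converted during the transport.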

Transporting the tablespace

  1. Prepare for export of the tablespace.
    • Check that the tablespace will be self contained:
      SQL> execute sys.dbms_tts.transport_set_check('TBS1,TBS2', true);
      SQL> select * from sys.transport_set_violations;

      Note: these violations must be resolved before the tablespaces can be transported.
    • The tablespaces need to be in READ ONLY mode in order to successfully run a transport tablespace export:
      SQL> ALTER TABLESPACE TBS1 READ ONLY;
      SQL> ALTER TABLESPACE TBS2 READ ONLY;
  2. Export the metadata.
    • Using the original export utility:
      exp userid=\'sys/sys as sysdba\' file=tbs_exp.dmp log=tba_exp.log transport_tablespace=y tablespaces=TBS1,TBS2
    • Using Datapump export:
      First create the directory object to be used by Data Pump, for example:
      CREATE OR REPLACE DIRECTORY dpump_dir AS '/tmp/subdir' ;
      GRANT READ,WRITE ON DIRECTORY dpump_dir TO system;

      Then initiate Datapump Export:
      expdp system/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir TRANSPORT_TABLESPACES = TBS1,TBS2

      If you want to perform a transport tablespace operation with a strict containment check, use the TRANSPORT_FULL_CHECK parameter:
      expdp system/password DUMPFILE=expdat.dmp DIRECTORY = dpump_dir TRANSPORT_TABLESPACES= TBS1,TBS2 TRANSPORT_FULL_CHECK=Y

      If the tablespace set being transported is not self-contained then the export will fail.
  3. Use V$TRANSPORTABLE_PLATFORM to determine the endianness of each platform. You can execute the following query on each platform instance:
    SELECT tp.platform_id,substr(d.PLATFORM_NAME,1,30), ENDIAN_FORMAT
    FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
    WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;

    If the endian formats are different, then a conversion is necessary before the tablespace set can be transported:
    RMAN> convert tablespace TBS1 to platform="Linux IA (32-bit)" FORMAT '/tmp/%U';

    RMAN> convert tablespace TBS2 to platform="Linux IA (32-bit)" FORMAT '/tmp/%U';

    Then copy the datafiles as well as the export dump file to the target environment.
  4. Import the transportable tablespace.
    • Using the original import utility:
      imp userid=\'sys/sys as sysdba\' file=tbs_exp.dmp log=tba_imp.log transport_tablespace=y datafiles='/tmp/....','/tmp/...'
    • Using Datapump:
      CREATE OR REPLACE DIRECTORY dpump_dir AS '/tmp/subdir';
      GRANT READ,WRITE ON DIRECTORY dpump_dir TO system;

      Followed by:
      impdp system/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir TRANSPORT_DATAFILES='/tmp/....','/tmp/...' REMAP_SCHEMA=(source:target) REMAP_SCHEMA=(source_sch2:target_schema_sch2)

      You can use REMAP_SCHEMA if you want to change the ownership of the transported database objects.
  5. Put the tablespaces in read/write mode:
    SQL> ALTER TABLESPACE TBS1 READ WRITE;
    SQL> ALTER TABLESPACE TBS2 READ WRITE;
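
After step 5, a quick sanity check can confirm that the transported tablespaces and their datafiles are available on the target (a minimal sketch, not part of the original procedure, assuming the tablespace names TBS1 and TBS2 used in the examples above):
-- Sketch: verify the transported tablespaces and datafiles on the target
select tablespace_name, status
from   dba_tablespaces
where  tablespace_name in ('TBS1', 'TBS2');

select file_name, tablespace_name, status
from   dba_data_files
where  tablespace_name in ('TBS1', 'TBS2');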

Using DBMS_FILE_TRANSFER

You can also use DBMS_FILE_TRANSFER to copy datafiles to another host.
From 11.2.0.4 and 12c onward, DBMS_FILE_TRANSFER performs the conversion by default: the destination database converts each block as it receives a file from a platform with different endianness. The datafiles can therefore be imported after they are moved to the destination database as part of a transportable operation, without a separate RMAN conversion.
In releases lower than 11.2.0.4, you need to follow the same steps specified above for ASM files; if the endian formats are different, you must run the RMAN conversion AFTER transferring the files. The files cannot be copied directly between two ASM instances on different platforms.

This is an example of usage:
RMAN> CONVERT DATAFILE
      '/hq/finance/work/tru/tbs_31.f',
      '/hq/finance/work/tru/tbs_32.f',
      '/hq/finance/work/tru/tbs_41.f'
      TO PLATFORM="Solaris[tm] OE (32-bit)"
      FROM PLATFORM="HP Tru64 UNIX"
      DB_FILE_NAME_CONVERT= "/hq/finance/work/tru/", "/hq/finance/dbs/tru"
      PARALLELISM=5;

The same example, but here with the destination being an ASM diskgroup:
RMAN> CONVERT DATAFILE
      '/hq/finance/work/tru/tbs_31.f',
      '/hq/finance/work/tru/tbs_32.f',
      '/hq/finance/work/tru/tbs_41.f'
      TO PLATFORM="Solaris[tm] OE (32-bit)"
      FROM PLATFORM="HP Tru64 UNIX"
      DB_FILE_NAME_CONVERT="/hq/finance/work/tru/", "+diskgroup"
      PARALLELISM=5;


*** WARNING ***

  • Index Organized Tables (IOT) can become corrupt when using Transportable Tablespace (TTS) from Solaris, Linux or AIX to HP/UX.
    This is a restriction caused by BUG:9816640.
    Currently there is no patch for this issue, the Index Organized Tables (IOT) need to be recreated after the TTS.

    See Document 1334152.1 Corrupt IOT when using Transportable Tablespace to HP from different OS
  • When tables with dropped columns are involved, Bug 13001379 - Datapump transport_tablespaces produces wrong dictionary metadata for some tables - can occur. See Document 1440203.1 for details on this alert.

Known issue Using DBMS_FILE_TRANSFER

=> Unpublished Bug 13636964 - ORA-19563 from RMAN convert on datafile copy transferred with DBMS_FILE_TRANSFER (Doc ID 13636964.8)
 Versions confirmed as being affected   
    11.2.0.3
 This issue is fixed in  
    12.1.0.1 (Base Release)
    11.2.0.4 (Future Patch Set)
  
Description

    A file transferred using DBMS_FILE_TRANSFER fails during an RMAN convert
    operation.
    eg:
     RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
     RMAN-00571: ===========================================================
     RMAN-03002: failure of conversion at target command at 01/24/2012 16:22:23
     ORA-19563: cross-platform datafile header validation failed for file +RECO/soets_9.tf
   
    Rediscovery Notes:
     If RMAN convert fails on a file transferred using DBMS_FILE_TRANSFER
     then it may be due to this bug
   
    Workaround
     Transfer the file using OS facilities.
=> Dbms_file_transfer Corrupts Dbf File When Copying between endians (Doc ID 1262965.1)

 


Limitations on Transportable Tablespace Use

  1. The source and target database must use the same character set and national character set (a quick check is sketched after this list).
  2. You cannot transport a tablespace to a target database in which a tablespace with the same name already exists. However, you can rename either the tablespace to be transported or the destination tablespace before the transport operation.
  3. Objects with underlying objects (such as materialized views) or contained objects (such as partitioned tables) are not transportable unless all of the underlying or contained objects are in the tablespace set.
    • Review the table "Objects Exported and Imported in Each Mode" in the Oracle Database Utilities documentation; several object types are not exported in tablespace mode.
  4. If the owners of the tablespace objects do not exist on the target database, the usernames need to be created manually before starting the transportable tablespace import.

    • If you use spatial indexes, then:
      • be aware that TTS across different endian platforms is not supported for spatial indexes in 10gR1 and 10gR2; this limitation has been lifted in 11g
      • specific Spatial packages must be run before exporting and after transportation; please see the Oracle Spatial documentation.
  5. Beginning with Oracle Database 11g Release 1, you must use only Data Pump to export and import the tablespace metadata for tablespaces that contain XMLTypes.

    The following query returns a list of tablespaces that contain XMLTypes:
    select distinct p.tablespace_name
    from dba_tablespaces p, dba_xml_tables x, dba_users u, all_all_tables t
    where t.table_name=x.table_name and
          t.tablespace_name=p.tablespace_name and
          x.owner=u.username;

    Transporting tablespaces with XMLTypes has the following limitations:
    1. The target database must have XML DB installed.
    2. Schemas referenced by XMLType tables cannot be the XML DB standard schemas.
    3. Schemas referenced by XMLType tables cannot have cyclic dependencies.
    4. Any row level security on XMLType tables is lost upon import.
    5. If the schema for a transported XMLType table is not present in the target database, it is imported and registered. If the schema already exists in the target database, an error is returned unless the ignore=y option is set.
  6. Advanced Queues: transportable tablespaces do not support 8.0-compatible advanced queues with multiple recipients.
  7. You cannot transport the SYSTEM tablespace or objects owned by the user SYS.
  8. Opaque Types: types such as RAW, BFILE, and the AnyTypes can be transported, but they are not converted as part of the cross-platform transport operation. Their actual structure is known only to the application, so the application must address any endianness issues after these types are moved to the new platform.
  9. Floating-Point Numbers: BINARY_FLOAT and BINARY_DOUBLE types are transportable using Data Pump, but not with the original export utility (EXP).
  10. Please also check Document 1454872.1 - Transportable Tablespace (TTS) Restrictions and Limitations: Details, Reference, and Version Where Applicable
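
For limitation 1 above, the character sets of the source and target databases can be compared by running the following on both sides (a minimal sketch; the two result sets should match):
-- Sketch: run on both source and target and compare the results
select parameter, value
from   nls_database_parameters
where  parameter in ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');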

Transportable tablespace EXP/IMP of ASM files

  • Using RMAN CONVERT

    There is no direct way to export/import ASM files as a transportable tablespace. However, the same functionality can be achieved via RMAN.

    You must follow these steps:
    1. Prepare for exporting the tablespace.

      • Check that the tablespace will be self contained:
        SQL> execute sys.dbms_tts.transport_set_check('TBS1,TBS2', true);
        SQL> select * from sys.transport_set_violations;

        Note: these violations must be resolved before the tablespaces can be transported.
      • The tablespaces need to be in READ ONLY mode in order to successfully run a transport tablespace export:
        SQL> ALTER TABLESPACE TBS1 READ ONLY;
        SQL> ALTER TABLESPACE TBS2 READ ONLY;
    2. Export the metadata.
      • Using the original export utility:
        exp userid=\'sys/sys as sysdba\' file=tbs_exp.dmp log=tba_exp.log transport_tablespace=y tablespaces=TBS1,TBS2
      • Using Datapump Export:
        CREATE OR REPLACE DIRECTORY dpump_dir AS '/tmp/subdir';
        GRANT READ,WRITE ON DIRECTORY dpump_dir TO system;

        followed by:
        expdp system/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir TRANSPORT_TABLESPACES = TBS1,TBS2

        If you want to perform a transport tablespace operation with a strict containment check, use the TRANSPORT_FULL_CHECK parameter:
        expdp system/password DUMPFILE=expdat.dmp DIRECTORY = dpump_dir TRANSPORT_TABLESPACES= TBS1,TBS2 TRANSPORT_FULL_CHECK=Y

      If the tablespace set being transported is not self-contained, then the export will fail.
    3. Use V$TRANSPORTABLE_PLATFORM to find the exact platform name of the target database. You can execute the following query on the target platform instance:
      SELECT tp.platform_id,substr(d.PLATFORM_NAME,1,30), ENDIAN_FORMAT
      FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
      WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;
    4. Generate an OS file from the ASM file, in target platform format:
      RMAN> CONVERT TABLESPACE TBS1
            TO PLATFORM 'HP-UX (64-bit)' FORMAT '/tmp/%U';
      RMAN> CONVERT TABLESPACE TBS2
            TO PLATFORM 'HP-UX (64-bit)' FORMAT '/tmp/%U';
    5. Copy the generated file to target server if different from source.
    6. Import the transportable tablespace
      • Using the original import utility:
        imp userid=\'sys/sys as sysdba\' file=tbs_exp.dmp log=tba_imp.log transport_tablespace=y datafiles='/tmp/....','/tmp/...'
      • Using Datapump Import:
        CREATE OR REPLACE DIRECTORY dpump_dir AS '/tmp/subdir';
        GRANT READ,WRITE ON DIRECTORY dpump_dir TO system;

        followed by:
        impdp system/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir TRANSPORT_DATAFILES='/tmp/....','/tmp/...' REMAP_SCHEMA=(source:target) REMAP_SCHEMA=(source_sch2:target_schema_sch2)

        You can use REMAP_SCHEMA if you want to change the ownership of the transported database objects.
    7. Put the tablespaces in read/write mode:
      SQL> ALTER TABLESPACE TBS1 READ WRITE;
      SQL> ALTER TABLESPACE TBS2 READ WRITE;

      If you want to transport the datafiles from the ASM area to a filesystem, you are finished after the above steps. If you want to transport tablespaces between two ASM areas, you must continue with the following steps.
    8. Copy the datafile '/tmp/....dbf' into the ASM area using rman:
      rman nocatalog target /
      RMAN> backup as copy datafile '/tmp/....dbf' format '+DGROUPA';

      where +DGROUPA is the name of the ASM diskgroup.
    9. Switch the datafile to the copy.
      If the 10g database is open, you need to take the datafile offline first:
      SQL> alter database datafile '/tmp/....dbf' offline;

      Switch to the copy:
      rman nocatalog target /
      RMAN> switch datafile '/tmp/....dbf' to copy;

      Note down the name of the copy created in the +DGROUPA diskgroup, e.g. '+DGROUPA/s101/datafile/tts.270.5'.
    10. Put the datafile online again; it needs to be recovered first:
      SQL> recover datafile '+DGROUPA/s101/datafile/tts.270.5';
      SQL> alter database datafile '+DGROUPA/s101/datafile/tts.270.5' online;
    11. Check that the datafile is indeed part of the ASM area and online:
      SQL> select name, status from v$datafile;

      The output should be:
      +DGROUPA/s101/datafile/tts.270.5 ONLINE
  • Using DBMS_FILE_TRANSFER

    You can also use DBMS_FILE_TRANSFER to copy datafiles from one ASM disk group to another, even on another host. Starting with 10g Release 2, DBMS_FILE_TRANSFER can also copy datafiles from ASM to a filesystem and from a filesystem to ASM.

    The PUT_FILE procedure reads a local file (on a filesystem or in ASM) and contacts a remote database to create a copy of the file in the remote file system. The file that is copied is the source file, and the new file that results from the copy is the destination file. The destination file is not closed until the procedure completes successfully.

    Syntax:
    DBMS_FILE_TRANSFER.PUT_FILE(
       source_directory_object       IN  VARCHAR2,
       source_file_name              IN  VARCHAR2,
       destination_directory_object  IN  VARCHAR2,
       destination_file_name         IN  VARCHAR2,
       destination_database          IN  VARCHAR2);

    Where:
    • source_directory_object: The directory object from which the file is copied at the local source site. This directory object must exist at the source site.
    • source_file_name: The name of the file that is copied from the local file system. This file must exist in the local file system in the directory associated with the source directory object.
    • destination_directory_object: The directory object into which the file is placed at the destination site. This directory object must exist in the remote file system.
    • destination_file_name: The name of the file placed in the remote file system. A file with the same name must not exist in the destination directory in the remote file system.
    • destination_database: The name of a database link to the remote database to which the file is copied.

    If you want to use DBMS_FILE_TRANSFER.PUT_FILE to transfer the files from the source to the destination host, steps 3, 4 and 5 above should be replaced by the following:
    1. Create a directory object on the target database host and grant permissions to the local user. This is the directory object into which the file is placed at the destination site; it must exist in the remote file system:
      CREATE OR REPLACE DIRECTORY target_dir AS '+DGROUPA';
      GRANT WRITE ON DIRECTORY target_dir TO "USER";
    2. Create a directory object on the source database host. This is the directory object from which the file is copied at the local source site; it must exist at the source site:
      CREATE OR REPLACE DIRECTORY source_dir AS '+DGROUPS/subdir';
      GRANT READ,WRITE ON DIRECTORY source_dir TO "USER";
      CREATE OR REPLACE DIRECTORY source_dir_1 AS '+DGROUPS/subdir/subdir_2';
    3. Create a database link to connect to the target database host:
      CREATE DATABASE LINK DBS2 CONNECT TO user IDENTIFIED BY password USING 'target_connect';

      where target_connect is the connect string for the target database and USER is the user that will be used to transfer the datafiles.
    4. Connect to source instance. The following items are used:
      • dbs1: Connect string to source database
      • dbs2: dblink to target database
      • a1.dat: Filename at source database
      • a4.dat: Filename at target database
      CONNECT user/password@dbs1

      -- - put a1.dat to a4.dat (using dbs2 dblink)
      -- - level 2 sub dir to parent dir
      -- - user has read privs on source_dir_1 at dbs1 and write on target_dir
      -- - in dbs2
      BEGIN
          DBMS_FILE_TRANSFER.PUT_FILE('source_dir_1', 'a1.dat',
                                      'target_dir', 'a4.dat', 'dbs2' );
      END;
      /
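
      The transfer can also be initiated from the destination side with DBMS_FILE_TRANSFER.GET_FILE, which pulls the file over a database link pointing back to the source. The following is only a sketch: it assumes a database link named src_link has been created on the destination database pointing to the source, and that the source_dir_1 and target_dir directory objects exist as described above.
      -- Sketch: pull a1.dat from the source database via the (assumed) src_link dblink
      BEGIN
          DBMS_FILE_TRANSFER.GET_FILE('source_dir_1', 'a1.dat', 'src_link',
                                      'target_dir', 'a4.dat');
      END;
      /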


REFERENCES

BUG:9816640 - ORA-600 [6200] ORA-600 [KDDUMMY_BLKCHK] IOT CORRUPTION CODE 6401 AFTER TTS
NOTE:13001379.8 - Bug 13001379 - Datapump transport_tablespaces produces wrong dictionary metadata causing errors like ORA-7445 [qeaeMinmax] / ORA-1858 etc...
NOTE:1334152.1 - ALERT: Corrupt IOT when using Transportable Tablespace to HP from different OS
NOTE:1324000.1 - ASMCMD Copy to Convert Endianess using RMAN fails in 11g
NOTE:1493809.1 - How to Move a Database Using Transportable Tablespaces
http://www.oracle.com/technetwork/database/features/availability/maa-wp-11g-platformmigrationtts-129269.pdf 

BUG:13001379 - DATAPUMP TRANSPORT_TABLESPACES PRODUCES WRONG METADATA FOR SOME TABLES
http://docs.oracle.com/cd/E16655_01/appdev.121/e17602/d_ftran.htm#CHDEFEGG 
NOTE:13636964.8 - Bug 13636964 - ORA-19563 from RMAN convert on datafile copy transferred with DBMS_FILE_TRANSFER

NOTE:1262965.1 - Dbms_file_transfer Corrupts Dbf File When Copying between endians
NOTE:1389592.1 - 11G - Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup
NOTE:1989198.1 - TTS changed XMLTYPE from CLOB to Binary, XML Tag Syntax are Changed

Data Guard Support for Heterogeneous Primary and Physical Standbys in Same Data Guard Configuration (Doc ID 413484.1)

What differences are allowed between a Primary Database and a Data Guard Physical Standby Database (Redo Apply)?

This note is updated for Redo Apply and Oracle Data Guard 12c.  It applies to all versions of Oracle Database 10g, 11g and Oracle Database 12c.


For information on supported configurations using Logical Standby (SQL Apply), see Support Note 1085687.1

Scope and Application: 

The simplest path when deploying Data Guard is to configure a homogeneous and symmetric primary/standby configuration.  However, it is often useful to deploy a heterogeneous configuration either to utilize existing servers that happen to run different operating systems or to facilitate migrations from one platform to another with minimal downtime or risk. It is also reasonable for users to wish to reduce their disaster recovery investment by purposely configuring a standby system with less processing capacity than production, or by utilizing lower cost components than used for their primary system. Use the instructions and information provided in this support note to determine which platform combinations are supported within a single Data Guard configuration and any additional requirements or restrictions that may apply.

If a heterogeneous primary/standby configuration is under consideration, Oracle recommends that users conduct sufficient testing to be sure that required service levels will continue to be achieved following a switchover or failover to the standby system.

1. Determine the Platform ID for your  primary and standby database. 

You can find the PLATFORM_ID inside the database in the V$DATABASE view using the query below:

SQL> select platform_id, platform_name from v$database;

PLATFORM_ID PLATFORM_NAME
----------- -----------------------------------
         10 Linux IA (32-bit)


Differences between the primary server(s) and the standby server(s) are always supported as long as the Oracle software installed on all servers is of the same Oracle platform as defined above, is certified to run on each server, and is the same Oracle Database release and patch set (a quick check is sketched after the list below). Examples of such differences that are supported include the following:
  • Hardware manufacturer (e.g. Dell and Sun or Hitachi and EMC)
  • Hardware configuration (e.g. number of CPUs, amount of RAM, storage configuration, etc)
  • Processor (e.g. x86-64 AMD64 and x86-64 Intel 64; POWER4 and POWER5) 
  • Operating system distribution (e.g. Red Hat Linux, SUSE Linux or Oracle Enterprise Linux)
  • Operating system version (e.g. Windows 2000 and Windows XP)
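
To confirm that both sides run the same release on the same platform, the following can be run on the primary and the standby and the output compared (a minimal sketch; patch-level comparison should additionally be done against your patch inventory, e.g. with OPatch):
SQL> select platform_id, platform_name from v$database;
SQL> select banner from v$version;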

2. If the Platform IDs for your primary and standby are different, check the table below to see if you have a supported configuration for Data Guard Redo Apply (Physical Standby).

In addition to general support when using the same Oracle platform, Data Guard Redo Apply (physical standby) can support specific mixed Oracle Platform combinations.  Oracle Platform IDs, platform names, and which combinations of platform ID(s) that can be combined to form a supported Data Guard configuration using Redo Apply are listed in the table below.  Platform combinations not listed in the table below are not supported using Data Guard Redo Apply.

Table Notes

  1. Prior to Data Guard 11g, the Data Guard Broker did not support different word-size in the same Data Guard configuration, thus requiring management from the SQL*Plus command line for mixed word-size Data Guard configurations.  This restriction is lifted from Data Guard 11g onward.
  2. Both primary and standby databases must be set at the same compatibility mode as the minimum release (if specified) in the table below.
  3. A standby database cannot be open read-only in any environment that has binary-level PL/SQL-related incompatibilities between primary and standby databases. Support Note 414043.1 is referenced in the table below for any platform combinations where this is the case (the note provides instructions for eliminating incompatibilities post role transition). It is possible to access a standby database in such environments in Oracle Database 11g by temporarily converting it to a Snapshot Standby database, or in Oracle Database 10g by opening the standby read/write as described in the Data Guard 10g Concepts and Administration guide: Using a Physical Standby Database for Read/Write Testing and Reporting. Both procedures require following the steps in Note 414043.1 before making the database available to users.
  4. Please be sure to read Support Notes when referenced in the table below.
  5. RMAN generally supports instantiation of a physical standby database for the supported platform combinations. Please see Support Note 1079563.1 for details.
  6. Platforms in a supported combination may operate in either the primary or standby role.
  7. Enterprise Manager cannot be used for standby database creation or other administrative functions in any configuration where PLATFORM_IDs are not identical. Oracle recommends using the Data Guard Broker command line interface (DGMGRL) to administer mixed platform combinations from Oracle Database 11g onward, and the SQL*Plus command line for configurations that pre-date Oracle Database 11g.

  
Each entry below lists the PLATFORM_ID and PLATFORM_NAME / release name, followed by the PLATFORM_IDs supported within the same Data Guard configuration when using Data Guard Redo Apply (Physical Standby).

2 - Solaris[tm] OE (64-bit) / Solaris Operating System (SPARC) (64-bit)
    2
    6 - See Support Note 1982638.1 and Note 414043.1

3 - HP-UX (64-bit) / HP-UX PA-RISC
    3
    4 - Oracle 10g onward, see Support Note 395982.1 and Note 414043.1

4 - HP-UX IA (64-bit) / HP-UX Itanium
    4
    3 - Oracle 10g onward, see Support Note 395982.1 and Note 414043.1

5 - HP Tru64 UNIX / HP Tru64 UNIX
    5

6 - IBM AIX on POWER Systems (64-bit)
    2 - See Support Note 1982638.1 and Note 414043.1
    6

7 - Microsoft Windows (32-bit) / Microsoft Windows (x86)
    7
    8, 12 - Oracle 10g onward, see Support Note 414043.1
    10 - Oracle 11g onward, requires Patch 13104881 (fix for 13104881 included in 12.1)
    11, 13 - Oracle 11g onward, see Support Note 414043.1, also requires Patch 13104881

8 - Microsoft Windows IA (64-bit) / Microsoft Windows (64-bit Itanium)
    7 - Oracle 10g onward, see Support Note 414043.1
    8
    12 - Oracle 10g onward
    11, 13 - Oracle 11g onward, requires Patch 13104881

9 - IBM zSeries Based Linux / z/Linux
    9
    18 (64-bit zSeries only)

10 - Linux (32-bit) / Linux x86
    7 - Oracle 11g onward, requires Patch 13104881
    10
    11, 13 - Oracle 10g onward, see Support Note 414043.1

11 - Linux IA (64-bit) / Linux Itanium
    10 - Oracle 10g onward, see Support Note 414043.1
    11
    13 - Oracle 10g onward
    7 - Oracle 11g onward, see Support Note 414043.1, also requires Patch 13104881
    8, 12 - Oracle 11g onward, requires Patch 13104881

12 - Microsoft Windows 64-bit for AMD / Microsoft Windows (x86-64)
    7 - Oracle 10g onward, see Support Note 414043.1
    8 - Oracle 10g onward
    12
    11, 13 - Oracle 11g onward, requires Patch 13104881

13 - Linux 64-bit for AMD / Linux x86-64
    7 - Oracle 11g onward, see Support Note 414043.1, also requires Patch 13104881
    10 - Oracle 10g onward, see Support Note 414043.1
    11 - Oracle 10g onward
    8, 12 - Oracle 11g onward, requires Patch 13104881
    13
    20 - Oracle 11g onward

15 - HP Open VMS / HP OpenVMS Alpha, HP IA OpenVMS, OpenVMS Itanium
    15

16 - Apple Mac OS / Mac OS X Server
    16

17 - Solaris Operating System (x86) / Solaris Operating System (x86)
    17
    20 - Oracle 10g onward, see Support Note 414043.1

18 - IBM Power Based Linux / Linux on Power
    9 (64-bit zSeries only)
    18

20 - Solaris Operating System (AMD64) / Solaris Operating System (x86-64)
    13 - Oracle 11g onward
    17 - Oracle 10g onward, see Support Note 414043.1
    20


3. Additional information: 

Transient Logical Database Rolling Upgrades: Beginning with Oracle Database 11.1.0.7, a physical standby database can be used to execute a rolling database upgrade to a new Oracle Patch Set or database release by using the transient logical rolling database upgrade process.  See the Maximum Availability Architecture Best Practice paper, " Rolling Database Upgrades for Physical Standby  Databases using Transient Logical Standby 11g".  The database rolling upgrade process enables a standby database to apply redo sent by a primary database that is operating at a previous Oracle release or patchset.  The transient logical rolling upgrade process requires that the primary and standby platform combination be a supported configuration for both Redo Apply (see table above) and SQL Apply (see Support Note 1085687.1) as of the pre-upgrade Oracle release deployed in the Data Guard configuration.
  

Data Guard Configurations that Include a Combination of Physical and Logical Standby Databases: A Data Guard configuration  includes a primary database and up to 30 standby databases. These standby databases may be a mix of physical and logical standby databases. All physical standby databases within a single Data Guard configuration must adhere to the requirements described in this note.  Likewise, if the configuration includes logical standby databases, they must conform to the requirements of Support Note 1085687.1.

Real Application Cluster & Automatic Storage Management: It is not necessary that the primary and the standby both be Oracle RAC databases, or both use ASM. For example, the primary database may be running Oracle RAC with or without ASM, and the standby database(s) may be single-instance, with or without ASM. Also, in case both the primary and standby are Oracle RAC databases, the number of Oracle RAC nodes between the primary and standby databases may vary. Furthermore, the versions of ASM and CRS do not need to be the same between the primary and standby systems.

Exadata Database Machine:    It is transparent to Data Guard whether primary and/or standby databases reside on an Exadata Database Machine or on other hardware, as long as the platform ID's of primary and standby systems within the same Data Guard configuration conform to the support requirements defined in the above table.  If Exadata Hybrid Columnar Compression (EHCC)  is used, it is strongly recommended that both primary and standby databases reside on Exadata. See the Maximum Availability Architecture Best Practice paper, "Disaster Recovery for Exadata Database Machine".

REFERENCES

NOTE:414043.1 - Role Transitions for Data Guard Configurations Using Mixed Oracle Binaries
BUG:12702521 - CANNOT SUPPORT SPARC<->AIX MIXED DATA GUARD DUE TO CONTROLFILE INCOMPATIBILITY
BUG:13104881 - ORA-600 [6101] DATA CORRUPTION IN 11.2.0.2 WINDOWS TO LINUX STANDBY DUPLICATION
NOTE:1079563.1 - RMAN DUPLICATE/RESTORE/RECOVER Mixed Platform Support