Knowledge Sharing
Thursday, August 8, 2019
OAC: Downloading Snapshots Throwing Error: “A connection to the server has failed.(status=504)”
On Oracle Analytics Cloud (OAC) version 18.2.1, downloading a snapshot fails with the error "A connection to the server has failed (status=504)", as shown below.
As a workaround, the metadata command line utilities can be used to export the BAR from the source and import it on the target VM.
- On the source Analytics Console UI, take a snapshot of the system as it exists now (this backup is restored again in the last step).
- Restore the snapshot that needs to be migrated.
Detailed Steps:
1. SSH into the source OAC VM and run a command line utility to generate/export a BAR file.
2. Use the below command to generate/export the snapshot BAR file.
cd /bi/app/public/bin
Example:
./export_archive bootstrap /u01/app/oracle/tools/home/oracle/exportdir/ --loglevel DEBUG --logdir /u01/app/oracle/tools/home
This will prompt for a password to encrypt the BAR file (the same as for the BAR file generation that occurs on the Analytics Console UI page).
3. Copy this BAR file into the target OAC VM.
4. Restore the BAR on the target.
cd /bi/app/public/bin
Example:
./import_archive bootstrap /u01/app/oracle/tools/home/oracle/exportdir/1536578078486/bootstrap.bar --loglevel DEBUG --logdir /u01/app/oracle/tools/home/oracle/logdir/
5. On the source, go back to the Analytics Console UI and restore the backup snapshot taken at the start of the workaround.
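For reference, a consolidated sketch of the command-line portion (steps 1-4), assuming the example paths shown above and SSH access between the VMs; the target host name and the use of scp for the copy step are placeholders, not part of the original utilities:

# On the source OAC VM: export the snapshot BAR (prompts for an encryption password)
cd /bi/app/public/bin
./export_archive bootstrap /u01/app/oracle/tools/home/oracle/exportdir/ --loglevel DEBUG --logdir /u01/app/oracle/tools/home

# Copy the generated BAR file to the target OAC VM (host name is a placeholder)
scp /u01/app/oracle/tools/home/oracle/exportdir/1536578078486/bootstrap.bar oracle@<target-oac-vm>:/u01/app/oracle/tools/home/oracle/exportdir/

# On the target OAC VM: import the BAR (prompts for the same password used at export time)
cd /bi/app/public/bin
./import_archive bootstrap /u01/app/oracle/tools/home/oracle/exportdir/1536578078486/bootstrap.bar --loglevel DEBUG --logdir /u01/app/oracle/tools/home/oracle/logdir/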
Friday, August 2, 2019
Timeout Issues : ODI
01. How to set up the Timeout for the graphical environment (ODI Studio)?
ODI 12c
The Oracle Data Integrator Timeout parameter (in seconds) is set in ODI 12c Studio from the menu bar under Tools > Preferences > ODI > System, as shown below:
Modifications to these parameters only impact the local ODI Studio on which they were made.
ODI 11g
The Oracle Data Integrator Timeout parameter (in seconds) is set from the ODI 11g Studio menu bar under ODI > User Parameters, in the property "Oracle Data Integrator Timeout".
The default value is 30 seconds.
02. How to set up the Timeout for the Agents (standalone Agent, colocated Agent, and J2EE Agent)?
The Oracle Data Integrator Timeout of any ODI 12c Agent can be configured from ODI Studio > Topology, by setting the desired number of seconds in the "JDBC connection timeout" property on the physical Agent's "Properties" tab, as shown below:
03. Additional settings for the J2EE Agent
In the WLS Admin Console, go to the deployment of the ODI J2EE Agent, and modify the Session Timeout field to the desired number of seconds.
Additionally, the connection timeout and the number of retries to establish the connection can also be increased by configuring the JDBC Data Source parameters in the WLS Admin Console.
Go to the Services > Data Sources entry. Edit the desired Data Source (odiMasterRepository, odiWorkRepository, etc.). Go to the Configuration > Connection Pool tab. Click the "Advanced" link at the bottom of the page, and set the Connection Creation Retry Frequency, Login Delay, and Inactive Connection Timeout fields to the desired values.
Also notice the oracle.net.CONNECT_TIMEOUT property (milliseconds) in the Connection Pool tab > "Properties" text box.
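For illustration, the Data Source timeout settings could look like the following sketch (the values are examples only, not recommendations from the original post):

# Connection Pool > "Properties" text box (oracle.net.CONNECT_TIMEOUT is in milliseconds)
oracle.net.CONNECT_TIMEOUT=10000

# Connection Pool > Advanced fields (values in seconds)
# Connection Creation Retry Frequency = 10
# Inactive Connection Timeout        = 300
# Login Delay                        = 1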
Java heap space error while importing the custom mappings into the test environment: ODI 11g/12c
We are trying to import (smart import) our custom mappings into the test environment and we are getting the java heap space error.
Syntax for setting the Java properties
Increase the Java heap values in the following files:
- ODI configuration files: product.conf, odi.conf and ide.conf
- ODI command files: setODIDomainEnv.cmd and setODIDomainEnv.sh
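As a rough sketch (the exact heap values depend on the available memory and are not from the original post), the heap is raised with the standard -Xms/-Xmx JVM options, for example:

# odi.conf / ide.conf / product.conf (ODI Studio): one JVM option per line
AddVMOption -Xms1024M
AddVMOption -Xmx4096M

# setODIDomainEnv.cmd / setODIDomainEnv.sh: raise the heap variables already defined in the file
# (ODI_INIT_HEAP / ODI_MAX_HEAP are the typical names in a 12c domain; verify them in your copy)
ODI_INIT_HEAP=1024m
ODI_MAX_HEAP=4096m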
Migrating BIAPPS Configuration: BIACM (Data Load Parameters, Domains and Mappings, Reporting Parameters)
The migration of functional setup data (Data Load Parameters, Domains and Mappings, Reporting Parameters) is performed by exporting the setup data from Configuration Manager in the source environment and then importing it into Configuration Manager in the target environment.
To migrate functional setup data:
1. Log into Configuration Manager in the source environment (for example, development) as a user with BI Applications Administrator privileges. Navigate to Setup Data Export and Import: Export Setup Data using the left hand Tasks pane.
2. From the Table toolbar on the Export Setup Data page, click the Export icon to display the New Data Entry Dialog: Export. In the Export dialog:
a. Provide a meaningful name for the export file.
b. Select the following objects to export:
· Data Load Parameters
· Domains and Mappings
· Reporting Parameters
NOTE: Do not select System Setups. System setups have already been completed as part of Step 1, Creating a Target Test or Production Environment.
3. Click on the Export button. In the File Download dialog, click Save to save the ZIP file to a location that you specify.
4. Copy the ZIP file exported in step 3 above to a file location that is accessible from the
machine that will run the target Configuration Manager browser window.
5. Log into Configuration Manager in the target environment (for example, test) as a user
with BI Applications Administrator privileges. Navigate to Setup Data Export and Import:
Import Setup Data using the left hand Tasks pane.
6. From the Table tool bar on the Import Setup Data page, click on the Import data icon to
display the New Data Entry Dialog: Import Data.
7. In the Import Data dialog, browse to locate the ZIP file copied to the location in step 4
above.
8. Click OK to import the functional setup data into the target Configuration Manager. The
Import table is updated with details of the import.
ODI Load Plan got stuck at Index Creation Step: Oracle BIAPPS 11g
The load plan gets stuck when the PARALLEL_LEVEL parameter passed to the DBMS_PARALLEL_EXECUTE.RUN_TASK package is greater than 1 (see the sample call below) and the database cannot run the parallel job chunks. There are two possible reasons for this; both are described after the screenshot.
Execution of the PL/SQL package (DBMS_PARALLEL_EXECUTE.RUN_TASK) with the sample parameters below:
DBMS_PARALLEL_EXECUTE.RUN_TASK('CREATE_INDEXES_17813500', l_sql_stmt, DBMS_SQL.NATIVE, parallel_level => 2);
ODI Load Plan Screenshot:
01. If the DB parameter shown below (job_queue_processes) is 0 and parallel_level => {more than 1}, the load plan will get stuck.
SQL> show parameter job_queue_processes

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
job_queue_processes                  integer     0
02. If dbms_scheduler is disabled, enable it.
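A minimal SQL sketch of the two fixes (run as a DBA user; the job_queue_processes value of 10 is only an example, not a recommendation from the original post):

-- 01. Allow the database to run parallel job chunks again
ALTER SYSTEM SET job_queue_processes = 10 SCOPE = BOTH;
SHOW PARAMETER job_queue_processes

-- 02. Re-enable the scheduler if it has been disabled
EXEC DBMS_SCHEDULER.SET_SCHEDULER_ATTRIBUTE('SCHEDULER_DISABLED', 'FALSE');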
Thursday, May 11, 2017
Schema on Write (Traditional Databases) vs Schema on Read (Hadoop)
The fundamental difference between traditional databases (Oracle, SQL Server, DB2) and Hadoop is Schema on Write vs Schema on Read.
Schema on Write
The steps are as below:
Step 1: The first step is to create the schema, i.e. define the table structure. For example:
CREATE TABLE EMP
(
Ename VARCHAR(50),
EmpID INT,
Salary FLOAT,
...
);
Step 2: Only once the table exists can we load data into it. For example, bulk load data into the EMP table from the emp.txt file:
BULK INSERT EMP
FROM 'C:\EMPDATA\emp.txt'
WITH (FIELDTERMINATOR = ',');
Step 3: Once the data is loaded, we can query it using a SELECT statement. For example:
SELECT Ename,Salary,... FROM EMP;
The above three steps demonstrate the schema on write that our traditional databases possess. It is important to note that we cannot add data to the table unless the schema has been declared.
If the data changes for a given column, say the data type of that column changes from INT to VARCHAR2, or a new column is added to the table, then the whole data for that column needs to be deleted and re-loaded. This is manageable when we have a small set of data or no foreign keys, but when we have terabytes of data and foreign keys on the table, it becomes a really challenging problem.
Hadoop and other big data technologies generally use Schema on Read, which follows a different sequence.
Schema on Read
Step 1: Load the data onto the Hadoop cluster.
hdfs dfs -copyFromLocal /tmp/EMP.txt /usr/hadoop/emp
Step 2: Query the data using a Python script, a Hive command, or any other means. For example:
hive> SELECT * FROM EMP;
OR
hadoop jar Hadoop-Emp.jar -mapper emp-map.py -reducer emp-red.py -input /usr/hadoop/emp/*.txt -output /usr/hadoop/output/query1A
Here, the data structure is interpreted as it is read, through the Python script or Hive command shown above. If a column is added to the table or the data type of a column changes, we can adjust the script to read the data. We do not need to reload the whole data set.
Let us understand the above theory with the help of the below example:
Consider a USER table with two columns, NAME and AGE, containing the sample data shown.
When we try to write the sample data shown in the USER table to a traditional database, it throws an error: AGE is an integer column and we are trying to insert 'XYZ' (varchar) data into it (and, depending on the database, 123 in the varchar NAME column may also be rejected or implicitly converted). Hence, the schema is verified while writing the data. The traditional database therefore has total control over the storage, which gives it the ability to enforce the schema as data is written. This is called Schema on Write.
When it comes to Hadoop or any other big data technology, it does not have control over the storage. When we try to load the above data into a Hive/HDFS table, the loading succeeds. Hive verifies the schema only while reading the data: a value that cannot be interpreted as the declared type (for example, 'XYZ' in the integer AGE column) is returned as NULL. The data is therefore verified while querying, hence Schema on Read. In short, Hadoop and other big data technologies do not verify the schema when data is loaded; the schema check happens while reading the data.
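A minimal HiveQL sketch of the USER example above (the table name user_data, the file path and the delimiter are assumptions for illustration):

-- Schema on Read: Hive accepts the file as-is and applies the schema only at query time
CREATE TABLE user_data (name STRING, age INT)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';

-- /tmp/user.txt contains the two sample rows:  John,25  and  123,XYZ
LOAD DATA LOCAL INPATH '/tmp/user.txt' INTO TABLE user_data;  -- no validation at load time

SELECT * FROM user_data;
-- John   25
-- 123    NULL   <- 'XYZ' cannot be read as INT, so AGE is returned as NULL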