Please note: this article is entirely based on work done by a colleague of mine, a developer who does not have an account on this site and does not have time to write the article himself. I am writing it on his behalf. I will try to forward him any comments that appear, but I cannot guarantee if or when he will be able to answer. I am far less experienced with Oracle, so I probably will not be able to help much myself.
Current situation
The approach currently used in COciSchema - i.e. relying on ALL_OBJECTS, ALL_TAB_COLUMNS and ALL_CONS_COLUMNS - is probably the correct solution, but it is very slow on Oracle databases. Experiments showed that in some situations (a slow, non-localhost server) the schema-analysing query based on these views may run for more than a minute (up to twenty minutes in isolated cases), which causes, for example, Gii to fail permanently with a timeout error if the PHP script execution time limit is too low. Such a situation was described, for example, in this post.
This approach works very well in other RDBMS (like MySQL), which are properly optimised for such queries. Unfortunately, it fails on an Oracle setup running on a slow server.
Possible solution
One possible solution is to introduce a new set of tables that holds information about the tables and views used in a project. This is a kind of caching on the database side.
The idea is as follows. We take the original SQL query from COciSchema which is used to read the table schema. It is fired on each Active Record use and, since it is one of the slowest queries in Oracle, it makes Active Record nearly unusable. Instead of calling this slow query every time, we fire it only when the real table schema changes, and instead of returning the results, we store them in our set of tables used for caching the real table schema. Then we change COciSchema so that it reads the table schema from these so-called technical tables. This speeds up the whole process significantly, because Active Record then queries a normal table, which is very fast.
This looks pretty much like using the schemaCachingDuration property (a configuration sketch follows this list), but it has some advantages:
- in most situations, caching in the database is faster than using a caching component (see the results at the end),
- schemaCachingDuration does not always work in Oracle, i.e. it does not always provide a noticeable performance improvement,
- on some slow servers the proposed solution makes Gii usable, while without it this tool is unusable (see here).
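For comparison, here is a minimal sketch of the built-in approach. It assumes a cache component is configured; the connection string, cache class and duration are illustrative only and not taken from the original project:

// config/main.php (sketch) - built-in schema caching for comparison
'components' => array(
    'cache' => array(
        'class' => 'CFileCache', // any CCache implementation can be used here
    ),
    'db' => array(
        'connectionString' => 'oci:dbname=//localhost:1521/XE', // adjust to your environment
        'schemaCachingDuration' => 3600, // cache table metadata for one hour
    ),
),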
Building caching tables
Here are example SQL statements that build these tables:
[sql]
create table WWW_TAB_COLUMNS
(
column_id NUMBER(10) not null,
table_name VARCHAR2(50) not null,
column_name VARCHAR2(50) not null,
data_type VARCHAR2(100) not null,
nullable CHAR(1),
data_default VARCHAR2(100),
key CHAR(1)
);
comment on column WWW_TAB_COLUMNS.nullable is 'Y/N';
comment on column WWW_TAB_COLUMNS.key is 'P/NULL';
alter table WWW_TAB_COLUMNS add constraint PK_WWW_TAB_COLUMNS primary
key (TABLE_NAME, COLUMN_NAME);
create table WWW_TABLES
(
table_name VARCHAR2(50) not null
);
alter table WWW_TABLES add constraint PK_WWW_TABLES primary key (TABLE_NAME);
create table WWW_TAB_CONS
(
table_name VARCHAR2(50) not null,
column_name VARCHAR2(50) not null,
position NUMBER(10) not null,
r_constraint_name VARCHAR2(50) not null,
table_ref VARCHAR2(50) not null,
column_ref VARCHAR2(200) not null
);
alter table WWW_TAB_CONS add constraint PK_WWW_TAB_CONS primary key
(TABLE_NAME, COLUMN_NAME);
alter table WWW_TAB_CONS add constraint FK_WWW_TAB_CONS foreign key
(TABLE_NAME) references WWW_TABLES (TABLE_NAME);
Feeding caching tables
After creating these tables and modifying the COciSchema class definition, you can insert any table schema into the new set of tables. Below are exemplary SQL statements for this. As stated earlier, they are pretty much the same as the ones originally used in COciSchema.
SQL for retrieving column data:
SELECT a.column_name, a.data_type ||
case
when data_precision is not null
then '(' || a.data_precision ||
case when a.data_scale > 0 then ',' ||
a.data_scale else '' end
|| ')'
when data_type = 'DATE' then ''
else '(' || to_char(a.data_length) || ')'
end as data_type,
a.nullable, a.data_default,
( SELECT D.constraint_type
FROM user_CONS_COLUMNS C
inner join user_constraints D On D.constraint_name = C.constraint_name
Where C.table_name = A.TABLE_NAME
and C.column_name = A.column_name
and D.constraint_type = 'P') as Key
FROM user_TAB_COLUMNS A
WHERE
A.TABLE_NAME = '<YOUR_TABLE_NAME>'
ORDER by a.column_id
SQL for retrieving references:
SELECT D.constraint_type, C.COLUMN_NAME, C.position, D.r_constraint_name,
E.table_name as table_ref, f.column_name as column_ref
FROM ALL_CONS_COLUMNS C
inner join ALL_constraints D on D.OWNER = C.OWNER and
D.constraint_name = C.constraint_name
left join ALL_constraints E on E.OWNER = D.r_OWNER and
E.constraint_name = D.r_constraint_name
left join ALL_cons_columns F on F.OWNER = E.OWNER and
F.constraint_name = E.constraint_name and F.position = c.position
WHERE C.OWNER = '{$schemaName}'
and C.table_name = '{$name}'
and D.constraint_type = 'R'
order by d.constraint_name, c.position
This step is essential, because the basis of speeding up Active Record with the presented solution is changing it so that it reads the table schema from the newly created set of tables, instead of querying Oracle for the table schema each time (one of the slowest operations in Oracle). Therefore, any change in the real table schema must be reflected in the above set of tables.
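One way to keep the caching tables in sync is a small refresh routine that is run whenever the real schema changes. The following is only a sketch: the console command name, its location and the idea of wiring the feeding statements into it are my assumptions, not part of the original solution.

// protected/commands/RefreshSchemaCacheCommand.php (hypothetical)
// Re-feeds the WWW_* caching tables for a single table after its real schema changed.
class RefreshSchemaCacheCommand extends CConsoleCommand
{
    public function actionIndex($table)
    {
        $db=Yii::app()->db;
        $table=strtoupper($table);
        // Remove stale cached metadata (child table first because of the foreign key).
        $db->createCommand('DELETE FROM WWW_TAB_CONS WHERE table_name=:t')->execute(array(':t'=>$table));
        $db->createCommand('DELETE FROM WWW_TAB_COLUMNS WHERE table_name=:t')->execute(array(':t'=>$table));
        $db->createCommand('DELETE FROM WWW_TABLES WHERE table_name=:t')->execute(array(':t'=>$table));
        // Re-register the table and re-run the feeding INSERT ... SELECT statements
        // (complete versions are listed in the "Insert queries for managing the caching tables" comment below).
        $db->createCommand('INSERT INTO WWW_TABLES (table_name) VALUES (:t)')->execute(array(':t'=>$table));
        // $db->createCommand($insertColumnsSql)->execute(array(':t'=>$table));
        // $db->createCommand($insertConstraintsSql)->execute(array(':t'=>$table));
    }
}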
Modifying COciSchema.php
After filling the set of tables with table schemas, you need to alter some parts of the COciSchema class code, either by overwriting the original class or by creating your own extension based on it:
protected function findColumns($table)
{
    list($schemaName,$tableName)=$this->getSchemaTableName($table->name);
    $sql=<<<EOD
SELECT UPPER(COLUMN_NAME) AS COLUMN_NAME, UPPER(DATA_TYPE) AS DATA_TYPE, NULLABLE, DATA_DEFAULT, KEY
FROM www_tab_columns
WHERE UPPER(table_name) = UPPER('{$tableName}')
ORDER BY column_id
EOD;
    $command=$this->getDbConnection()->createCommand($sql);
    if(($columns=$command->queryAll())===array()){
        return false;
    }
    foreach($columns as $column)
    {
        $c=$this->createColumn($column);
        $table->columns[$c->name]=$c;
        if($c->isPrimaryKey)
        {
            if($table->primaryKey===null)
                $table->primaryKey=$c->name;
            else if(is_string($table->primaryKey))
                $table->primaryKey=array($table->primaryKey,$c->name);
            else
                $table->primaryKey[]=$c->name;
            /*if(strpos(strtolower($column['Extra']),'auto_increment')!==false)
                $table->sequenceName='';*/
        }
    }
    return true;
}

protected function findConstraints($table)
{
    $sql=<<<EOD
SELECT UPPER(COLUMN_NAME) AS COLUMN_NAME, UPPER(TABLE_REF) AS TABLE_REF, UPPER(COLUMN_REF) AS COLUMN_REF
FROM WWW_TAB_CONS
WHERE UPPER(TABLE_NAME) = UPPER('{$table->name}')
ORDER BY POSITION
EOD;
    $command=$this->getDbConnection()->createCommand($sql);
    foreach($command->queryAll() as $row)
    {
        $name=$row["COLUMN_NAME"];
        $table->foreignKeys[$name]=array($row["TABLE_REF"], array($row["COLUMN_REF"]));
        if(isset($table->columns[$name]))
            $table->columns[$name]->isForeignKey=true;
    }
}

protected function findTableNames($schema='')
{
    $sql='SELECT UPPER(table_name) AS TABLE_NAME FROM www_tables';
    $command=$this->getDbConnection()->createCommand($sql);
    $rows=$command->queryAll();
    $names=array();
    foreach($rows as $row)
    {
        $names[]=$row['TABLE_NAME'];
    }
    return $names;
}
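If you would rather not overwrite the framework file, a minimal sketch of the extension approach looks like this. The class name MyOciSchema and its location under application.components are assumptions (they match the driverMap tip in the comments below); paste the three modified methods shown above into the class body.

// protected/components/MyOciSchema.php (hypothetical)
Yii::import('system.db.schema.oci.COciSchema');

class MyOciSchema extends COciSchema
{
    // findColumns(), findConstraints() and findTableNames() from above go here,
    // so that the schema is read from the WWW_* tables instead of the data dictionary.
}

The class is then registered through the db component's driverMap, as shown in the "Extending the COciSchema" comment below.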
Final words
This is one possible solution. It was designed for our own project and therefore might not satisfy other developers' needs, or it may need some slight changes. On the other hand, we were able to speed up Active Record roughly ten times. For example, selecting 300 records (160 columns) took 0.24 s, while before (with the original Yii built-in COciSchema) the same query took around 2.45 s. Both tests were done without any caching component.
Performance vs. parameter binding
@sx9: Are you asking whether just adding parameter binding will speed up your queries (not using the above solution at all), or whether we tested our solution with parameter binding?
The answer to the second question is that this solution should speed up any query passed to the Oracle database. Parameter binding is done on the PHP side (in either Yii or PDO) and the result is a query quite similar to one not using parameter binding. In other words, whether you use it or not should not influence the speed when using our solution.
As for the first question: I can't say whether using parameter binding is faster than not using it, because I haven't had the opportunity to test it and because it is not really related to this article! :)
From the creator of OCI support for Yii
Sorry, but this article is unnecessary.
Just use the cache - yes, it works (ALWAYS works!). Maybe you are misusing the cache...
The cache will make queries EVEN faster than yours (the metadata is requested only once and cached; subsequent queries do not request it any more).
Gii also works - yes, sometimes it fails the first time, but with the cache the next attempts work very well.
Cache?
@rickgrana: First of all - I'm not working on the project where I used Oracle any more. I'm not quite sure right now, but the problem was that we were not able to find a good cache component for PHP running under both Windows and Linux (we were developing a localhost application to be deployed on many customers' servers).
Second of all - using a cache was not a good idea at all in my project. It was meant to handle hundreds (if not thousands) of small SQL queries, exchanging millions of small chunks of data per second, with data changing dynamically every second. If I'm not mistaken, using a cache in such a project would only slow down performance.
I was told many times that a cache is not always the best solution for every project.
@trejder
I use CFileCache. It works well on both Windows and Linux. I can still delete the cache files whenever I think it necessary.
Using many queries and retrieving a lot of data is not a problem. The metadata query is made just once per table.
Caching metadata is always a good idea, but caching result sets is not. Yes, when your data changes very often, caching it is unnecessary and slows down performance, as you said. But that would happen even on MySQL or Postgres.
A simple optimization does make a difference
Actually, after a quite simple optimization of COciSchema->findColumns() I managed to reduce the time it takes to read in the metadata by about 50%.
For some reason the existing query uses a scalar subquery to determine whether a column belongs to the primary key, which is inefficient compared to an outer join.
Could anyone please update the method to use the following, completely equivalent query, which significantly improves execution time?
[sql]
select tc.column_name,
       tc.data_type ||
       case
         when data_precision is not null
           then '(' || tc.data_precision ||
                case when tc.data_scale > 0 then ',' || tc.data_scale else '' end
                || ')'
         when data_type = 'DATE' then ''
         when data_type = 'NUMBER' then ''
         else '(' || To_Char(tc.data_length) || ')'
       end as data_type,
       tc.nullable, tc.data_default,
       Nvl2(cc.position, 'P', null) as key
from all_tab_columns tc
     inner join all_objects ao
        on ao.owner = tc.owner and ao.object_name = tc.table_name
     left join all_constraints cs
        on cs.owner = ao.owner and cs.table_name = ao.object_name and cs.constraint_type = 'P'
     left join all_cons_columns cc
        on cc.owner = cs.owner and cc.table_name = cs.table_name
       and cc.constraint_name = cs.constraint_name and cc.column_name = tc.column_name
where ao.owner = '{$schemaName}'
  and ao.object_type in ('TABLE', 'VIEW')
  and ao.object_name = '{$tableName}'
order by tc.column_id
@yktoo - you could create a GitHub ticket and even push a change request with your patch...
@trejder - about param binding in PDO OCI: there is a difference between named parameters, which are internally passed to the Oracle engine with OCIBindByName, and a query without params. Also, the Oracle query runner can cache query execution plans with named params and reuse them, which should speed up running similar queries. So using params is rather recommended...
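To illustrate the difference, a minimal sketch of a named-parameter query in Yii; the table and column names are made up for the example:

// Hypothetical query with a named parameter; Oracle can reuse the cached execution
// plan because the statement text stays identical across different values.
$rows=Yii::app()->db->createCommand('SELECT * FROM employees WHERE department_id=:dep')
    ->bindValue(':dep', 42)
    ->queryAll();
// Without binding, every value produces a new statement text
// ('... WHERE department_id=42', '... WHERE department_id=43', ...),
// so the execution plan cannot be reused as easily.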
thanks to yktoo
@yktoo
Thanks for your suggested changes to the findColumns() method. It works great, at least for me. I have enabled CApcCache for my application and made the changes suggested by yktoo.
Performance was far better than before.
Is it going to be updated in the next release of Yii?
@codesutra
My pull request was rejected because the speed gain was [reportedly] not consistent. But anyway, you can always override it in your app.
@yktoo
Ahhh, no prob :) Thanks again - your code really did the trick there ;)
Insert queries for managing the caching tables
Hi,
You can use the following queries to insert data into the caching tables. Just change <YOUR_TABLE_NAME> to your table name. This SQL script will automatically insert all the data needed for the caching tables.
-- INSERT INTO TABLES
INSERT INTO WWW_TABLES (TABLE_NAME) VALUES ('<YOUR_TABLE_NAME>');

-- INSERT QUERY FOR ORACLE SCHEMA CACHING
INSERT INTO WWW_TAB_COLUMNS (COLUMN_ID, TABLE_NAME, COLUMN_NAME, DATA_TYPE, NULLABLE, "KEY")
SELECT A.COLUMN_ID, '<YOUR_TABLE_NAME>' AS TABLE_NAME, a.column_name,
       a.data_type ||
       case
         when data_precision is not null
           then '(' || a.data_precision ||
                case when a.data_scale > 0 then ',' || a.data_scale else '' end
                || ')'
         when data_type = 'DATE' then ''
         else '(' || to_char(a.data_length) || ')'
       end as data_type,
       a.nullable,
       (SELECT D.constraint_type
          FROM user_CONS_COLUMNS C
               inner join user_constraints D On D.constraint_name = C.constraint_name
         Where C.table_name = A.TABLE_NAME
           and C.column_name = A.column_name
           and D.constraint_type = 'P') as Key
FROM user_TAB_COLUMNS A
WHERE A.TABLE_NAME = '<YOUR_TABLE_NAME>'
ORDER by a.column_id;

-- INSERT IF ANY RELATION AVAILABLE BETWEEN THE TABLES
INSERT INTO WWW_TAB_CONS (TABLE_NAME, COLUMN_NAME, POSITION, R_CONSTRAINT_NAME, TABLE_REF, COLUMN_REF)
SELECT '<YOUR_TABLE_NAME>' AS TABLE_NAME, C.COLUMN_NAME, C.position, D.r_constraint_name,
       E.table_name as table_ref, f.column_name as column_ref
FROM ALL_CONS_COLUMNS C
     inner join ALL_constraints D on D.OWNER = C.OWNER and D.constraint_name = C.constraint_name
     left join ALL_constraints E on E.OWNER = D.r_OWNER and E.constraint_name = D.r_constraint_name
     left join ALL_cons_columns F on F.OWNER = E.OWNER and F.constraint_name = E.constraint_name and F.position = c.position
WHERE C.table_name = '<YOUR_TABLE_NAME>'
  and D.constraint_type = 'R'
order by d.constraint_name, c.position;
Hope this will help you all!
Thanks in advance
Cheers!
Error Undefined index: COLUMN_COMMENT
If you receive the error "Undefined index: COLUMN_COMMENT", just change line 237 of the COciSchema class to:
$c->comment = !isset($column['COLUMN_COMMENT']) ? '' : $column['COLUMN_COMMENT'];
Extending the COciSchema
To customize COciSchema's methods while preserving the original class, just change the driverMap in your database component:
'driverMap' => array( 'oci' => 'application.components.MyOciSchema', ),
Optimization of findConstraints SQL
My advice is to override the query in COciSchema->findConstraints(), which is simply inefficient. Using a WITH clause, I was able to reduce its execution time by 80% in this case.
[sql]
WITH tcc AS (
    SELECT con.owner, con.constraint_name, con.constraint_type, con.table_name,
           con.r_owner, con.r_constraint_name, col.column_name, col.position
    FROM all_constraints con, all_cons_columns col
    WHERE con.owner = col.owner
      AND con.constraint_name = col.constraint_name
)
SELECT t1.constraint_type AS constraint_type, t1.column_name, t1.position, t1.r_constraint_name,
       t2.table_name AS table_ref, t2.column_name AS column_ref, t1.table_name
FROM tcc t1
     LEFT JOIN tcc t2 ON t2.constraint_name = t1.r_constraint_name
                     AND t2.owner = t1.r_owner
                     AND t2.position = t1.position
WHERE t1.constraint_type <> 'P'
  AND t1.owner = '{$table->schemaName}'
  AND t1.table_name = '{$table->name}'
ORDER BY t1.constraint_name, t1.position
my solution
I overrode COciSchema->findConstraints() with Nashi's solution (thanks!) and also added the following to config/main.php:
in the components section:
'db' => array(
    // 30 days
    'schemaCachingDuration' => 24*3600*30,
    'enableParamLogging' => true,
    'enableProfiling' => true,
    'emulatePrepare' => true,
    'driverMap' => array(
        'oci' => 'application.components.MyOciSchema',
    ),
),
'cache' => array(
    'class' => 'system.caching.CDbCache',
),
On the home page, or in a cron script, I call the function (a sketch of what this could look like follows below).
This way I have the whole schema cached for a long time and performance is optimal. The approach is similar, but I avoided creating the tables in the database with automatic population.
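The original comment does not show the actual warm-up call, so the following is only a guess at what it could look like; the command name is hypothetical:

// protected/commands/WarmSchemaCommand.php (hypothetical)
// Loading every table schema once fills the schema cache (CDbCache above)
// for the configured schemaCachingDuration; suitable for a cron job.
class WarmSchemaCommand extends CConsoleCommand
{
    public function run($args)
    {
        foreach(Yii::app()->db->getSchema()->getTables() as $table)
            echo "cached schema of {$table->name}\n";
    }
}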