Wednesday, January 23, 2008

Optimizing RAC performance for SOA - Part 2

As promised in part 1, here is the DDL for using non-ASSM BLOB storage with CUBE_SCOPE. There are assumptions here about the use of ASM for data storage and the existence of some pre-created tablespaces (for indexes, etc.), so your mileage may vary. The purpose is to illustrate how a manually-managed tablespace is created and used.


I should have comparable DDL for the other tables with similar issues in a future post.


CREATE TABLESPACE "CUBE_SCOPE_LOB"
DATAFILE '+DATA/soa/datafile/cslob.dbf'
SIZE 2560M REUSE AUTOEXTEND ON NEXT 512M MAXSIZE 5120M
NOLOGGING EXTENT MANAGEMENT LOCAL UNIFORM SIZE 32768K
SEGMENT SPACE MANAGEMENT MANUAL;

CREATE TABLE orabpel.CUBE_SCOPE
(
CIKEY INTEGER,
DOMAIN_REF INTEGER,
MODIFY_DATE TIMESTAMP(6),
SCOPE_BIN BLOB
)
TABLESPACE ORABPEL
PCTUSED 0
PCTFREE 10
INITRANS 1
MAXTRANS 255
STORAGE (
INITIAL 32M
PCTINCREASE 0
FREELISTS 6
FREELIST GROUPS 5
BUFFER_POOL DEFAULT
)
LOGGING
NOCOMPRESS
LOB (SCOPE_BIN) STORE AS
(
TABLESPACE CUBE_SCOPE_LOB
DISABLE STORAGE IN ROW
CHUNK 8192
PCTVERSION 0
CACHE
STORAGE (
INITIAL 32M
NEXT 32M
PCTINCREASE 0
FREELISTS 6
FREELIST GROUPS 5
BUFFER_POOL DEFAULT
)
)
NOCACHE
NOPARALLEL
MONITORING;

CREATE UNIQUE INDEX CS_PK ON CUBE_SCOPE(CIKEY)
LOGGING
TABLESPACE BPELIDX
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (
INITIAL 64K
PCTINCREASE 0
BUFFER_POOL DEFAULT
)
NOPARALLEL;
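
Once the objects are created, a quick sanity check (a hypothetical verification query, assuming you have access to the standard DBA views) confirms that the LOB segment landed in the manually-managed tablespace:

-- Confirm the SCOPE_BIN LOB lives in the manual (non-ASSM) tablespace
SELECT l.table_name,
       l.column_name,
       l.tablespace_name,
       t.segment_space_management
  FROM dba_lobs l
  JOIN dba_tablespaces t ON t.tablespace_name = l.tablespace_name
 WHERE l.owner = 'ORABPEL'
   AND l.table_name = 'CUBE_SCOPE';
-- Expect TABLESPACE_NAME = CUBE_SCOPE_LOB and SEGMENT_SPACE_MANAGEMENT = MANUAL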

2 comments:

Unknown said...

If I create a non-ASSM tablespace with your DDL, I get the message below. We have similar issues in our dehydration store DB:

ORA-32772: BIGFILE is invalid option for this type of tablespace

Sean Carey said...

That's strange. I'm not specifying a bigfile tablespace here, so bigfile may be the default tablespace type on your server. Bigfile tablespaces require automatic segment space management, which conflicts with the SEGMENT SPACE MANAGEMENT MANUAL clause. Try specifying SMALLFILE explicitly in the tablespace creation statement.
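
Something along these lines (untested, adjust sizes and file names to your environment) should force a smallfile tablespace:

CREATE SMALLFILE TABLESPACE "CUBE_SCOPE_LOB"
DATAFILE '+DATA/soa/datafile/cslob.dbf'
SIZE 2560M REUSE AUTOEXTEND ON NEXT 512M MAXSIZE 5120M
NOLOGGING EXTENT MANAGEMENT LOCAL UNIFORM SIZE 32768K
SEGMENT SPACE MANAGEMENT MANUAL;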