SAP BW Notes

My notes while studying for the certification exams :)

Friday, April 27, 2007

Notes I

Important points to remember for the certification:

  1. Infoobject Maintenance

  • 'Attribute only' flag in RSD1

If you mark the characteristic as an exclusive attribute, it can only be used as a display attribute for another characteristic, not as a navigational attribute. In addition, the characteristic cannot be transferred into InfoCubes, but it can still have master data.


  • Global Transfer routine

If this indicator is set, a transfer routine is defined for the InfoObject. This routine is integrated into all transfer rules where the InfoObject is contained in the communication structure. With data transfers, the logic contained in the individual transfer rules (from transfer structure to communication structure) runs first. The transfer routine is then carried out on the value of the corresponding field in the communication structure for every InfoObject that has a transfer routine and is contained in the communication structure.

In the transfer routine, you can define DataSource-independent code that you only have to maintain once, but is valid for all transfer rules.


  • Filter selection values


Selection of Filter Values for Query Definition

This field describes how the selection of filter values and the restriction of characteristics functions when you define queries.

The values from the master data table are normally displayed when you restrict characteristics. For characteristics without a master data table, the values from the SID table are displayed instead. In a number of cases, however, it could be better to display only the values that are in the InfoProvider. The setting "InfoProvider Values Only" is permitted for this reason.


Selection of Filter Values for Query Execution

This field describes how the selection of filter values functions when a query is executed.

Normally, when you select filter values while executing a query, only the values for which data was selected in the current navigation state are displayed.

In a number of cases it can make sense to permit additional values as well. The settings "InfoProvider Values Only" and "Master Data Table Values" are allowed for this reason. If you make this selection, the message "No Data Found" may appear after selecting filter values.


  • Master Data Tables Created in the DDIC


The following tables are created in the ABAP Dictionary:

P Table – Created for the time-independent attributes.

The key comprises the characteristic value, the compounding characteristics (if any) and OBJVERS (A = active, M = modified); the field CHANGED (D = deleted, I = to be inserted) is carried along as well.


Q Table – For the time-dependent attributes. Similar to the P table, but with the addition of the Date To field in the key.


X Table – This table is for the time-independent navigational attributes.

The key comprises the SID of the characteristic value and the compounding characteristics (if any); the SIDs of the navigational attributes are stored as data fields.


Y Table – This is for the time-dependent navigational attributes. The structure is similar to the X table above, with the addition of the Date To field in the key.
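The P/Q/X/Y distinction above can be summarised in a small conceptual sketch (not actual SAP code; just the decision rule for which master data table type holds an attribute):

```python
# Conceptual sketch (not actual SAP code): which master data table type
# stores an attribute, based on whether it is navigational and/or
# time dependent.

def master_data_table(navigational: bool, time_dependent: bool) -> str:
    """Return the master data table type (P, Q, X or Y) for an attribute."""
    if navigational:
        # Navigational attributes are stored as SIDs in the X/Y tables
        return "Y" if time_dependent else "X"
    # Other attributes are stored by value in the P/Q tables
    return "Q" if time_dependent else "P"

print(master_data_table(False, True))  # Q
print(master_data_table(True, False))  # X
```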


  • Hierarchy Tables


H Table

This table contains the structure of the hierarchy.


K Table - SID table for hierarchies

Hierarchy nodes (like characteristic values) are not stored with actual values, but with master data IDs (SIDs) in aggregates. The conversion of hierarchy nodes into SIDs is stored in this table. Contrary to characteristic values, hierarchy nodes get negative SIDs.


I Table - SID Structure of the hierarchy

With the aid of the SIDs of characteristic values and hierarchy nodes, the hierarchy structure is stored in this table. In principle, this table saves the same information as the hierarchy table.

Since only SIDs, and not characteristic values or hierarchy nodes, can be stored in aggregates and InfoCubes, analyzing the data using this table is more efficient and faster.


J Table - Hierarchy interval table (of a characteristic)

In this table information is stored on hierarchy nodes that are intervals.


  • Change from display to navigational attribute after master data load

Once master data is loaded, a display attribute can still be converted into a navigational attribute, but not vice versa. The general rule: if the change would only enhance the data structure, it is possible; if fields of the data structure would have to be deleted (e.g. converting a time-dependent display attribute to time-independent), it is not possible without deleting the master data.


  • ODS Object for Checking

If an ODS object is stored for a characteristic in order to check the characteristic values, the valid characteristic values are determined from the ODS object, and not from the master data of the characteristic, in the update and transfer rules. The ODS object must have the characteristic itself and all the fields from compounding as key fields.


  • Non Cumulative Key Figures:

Warehouse stock is a typical example: the stock changes over time, so there is a particular stock value at any given point in time, but the values cannot be aggregated meaningfully over time.


Hence we can use Non-cumulative Key Figures to model such key figures.

There are two possibilities using Non Cumulative key figures:

  1. Non cumulative with non cumulative change

In this case, once the initialization is done, the changes are uploaded into the cube. By default, aggregation for the non-cumulative key figure is set to SUM. The non-cumulative change (e.g. ZNBASEQTY) is the key figure that actually stores the changes. When the non-cumulative key figure is included in an InfoCube, it is actually the change key figure that is stored, and the non-cumulative key figure is calculated at OLAP runtime.

  2. Non cumulative with inflow and outflow

Once the initialization is done, only the inflow and outflow are uploaded into the cube.

Inflow is the positive qty, and the outflow is the negative qty.
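The two modelling options above can be sketched conceptually (hypothetical numbers; a toy model, not BW code). In both cases the value at a point in time is derived from the opening balance plus the net change up to that point:

```python
# Conceptual sketch of the two non-cumulative models (hypothetical
# numbers, not BW code).

def stock_at(opening, changes, t):
    """Non-cumulative with non-cumulative change:
    changes is a list of (day, delta) pairs."""
    return opening + sum(delta for day, delta in changes if day <= t)

def stock_at_flows(opening, inflows, outflows, t):
    """Non-cumulative with inflow and outflow: two positive quantities."""
    total_in = sum(qty for day, qty in inflows if day <= t)
    total_out = sum(qty for day, qty in outflows if day <= t)
    return opening + total_in - total_out

# Opening balance 100; +20 on day 1, -30 on day 2 => stock 90 on day 2
print(stock_at(100, [(1, 20), (2, -30)], 2))         # 90
print(stock_at_flows(100, [(1, 20)], [(2, 30)], 2))  # 90
```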

For more information please also see the link below:

http://help.sap.com/saphelp_nw04/helpdata/en/80/1a62dee07211d2acb80000e829fbfe/frameset.htm


While scheduling the InfoPackage, you need to select 'Opening Bal' for the initial upload of the data.

If you select Initialization run for non-cumulatives, the InfoSource for constructing a non-cumulative is brought into use. If you remove the selection, the structure may not contain any non-cumulative values.


  2. InfoCube

  • Important Numbers

There are 13 freely usable dimensions, in addition to the default dimensions Package, Unit and Time. Each dimension can contain up to 248 characteristics. A maximum of 233 key figures is possible in the InfoCube.


  • High Cardinality

While defining dimensions, you can flag the 'High Cardinality' indicator when the characteristics may have a large number of unique values, leading to a dimension size of around 10 to 20% of the fact table. In that case B-tree indexes are created instead of bitmap indexes.


  • Transactional Infocube

A transactional InfoCube is a special BasicCube, developed especially for Strategic Enterprise Management (SAP SEM). The system accesses data in such a cube transactionally; in other words, data is written to the InfoCube (possibly by more than one user at the same time) and instantly read again when required. Standard BasicCubes are not suitable for this.


  3. Data Extraction

Once the DataSource is activated in RSA6 and then replicated to BW, the transfer structure is still not created. It is created only when the transfer rules are activated; at that point the transfer structure is created in both the BW system and the source system. The transfer structure is similar to the extract structure, but does not contain the fields that are hidden in the extract structure.


Extract Structure

Hidden fields are not part of the transfer structure.
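A conceptual sketch of this derivation (illustrative field names, not a real DataSource):

```python
# Conceptual sketch: the transfer structure is derived from the extract
# structure by dropping the fields hidden in the DataSource. The field
# names and the "hidden" flag representation are illustrative only.

extract_structure = [
    {"field": "MATNR", "hidden": False},
    {"field": "WERKS", "hidden": False},
    {"field": "INTERNAL_FLAG", "hidden": True},  # hidden in RSA6
]

transfer_structure = [f["field"] for f in extract_structure if not f["hidden"]]
print(transfer_structure)  # ['MATNR', 'WERKS']
```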



  • Transfer rules are source system specific, while update rules are specific to the data targets.


  • Referential Integrity Flag in InfoSource Maintenance

If you set this flag, a check for referential integrity is performed for this InfoObject against the master data table or ODS object. The InfoObject is checked for valid characteristic values.

Under Dependencies in the InfoObject, you can set which object (master data table or ODS object) the InfoObject is to be checked against.


Prerequisites for checking for referential integrity:

Data is updated flexibly

You have activated error handling in the BW scheduler (Tab page Update)

On the Update tab page in the BW scheduler, you have selected the option Always update data, even when no master data exists for the data. If the other option is selected, then the referential integrity check is overridden.


If no SID exists for the uploaded characteristic values, an error may occur during extraction.


  • Infopackage update settings


  1. Always update data even if no master data exists for the data.

In this case, data is uploaded even if there is no master data maintained for the characteristic values. The SIDs are generated while uploading the data. The master data can be uploaded afterwards.


  2. Do not update data if no master data exists for a characteristic.

In this case, if no SIDs exist for the characteristic values in the master data, the extraction terminates with an error.


  • Error Handling

This setting controls the behaviour when errors occur during extraction.


  • Repair Request



Indicate Full Request as Repair Request

If you flag a request in full update mode as a repair request, it can be updated into all data targets, even if they already contain data from initialization runs or deltas for this DataSource / source system combination and have overlapping selections.

Consequently, a repair request can be updated at any time without checking each ODS object. The system supports loading in an ODS object by using the repair request without having to check the data for overlapping or request sequencing. This is because you can also delete selectively without checking an ODS object.


Note

Posting such requests can lead to duplicate data records in the data target.


  • Conversion Routine in Infopackage data selection

Use Conversion Exit

If you set this indicator, the conversion exits are used (if available) when you maintain selection criteria. This means that the field values are displayed in external format and also have to be entered in external format.

If you do not set this indicator, the field values are shown in internal format in the selection criteria maintenance and you also have to enter them in internal format.



  • Start Routine in the Transfer rules

In the start routine, the table DATAPAK (of type TAB_TRANSTRU) contains the extracted records. These records can be processed in the start routine. Setting ABORT <> 0 skips this data package.
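A conceptual Python model of what such a start routine does (the real routine is ABAP working on DATAPAK; the filter rule and field name here are hypothetical):

```python
# Conceptual Python model of a transfer-rule start routine. The routine
# may drop records from the package; a non-zero abort value skips the
# whole data package. Field name ORDTYPE and the filter rule are
# illustrative assumptions.

def start_routine(datapak):
    abort = 0
    # Illustrative rule: drop records without an order type
    datapak = [rec for rec in datapak if rec.get("ORDTYPE")]
    if not datapak:  # nothing left: skip this data package
        abort = 4
    return datapak, abort

recs, abort = start_routine([{"ORDTYPE": "YGT1"}, {"ORDTYPE": ""}])
print(len(recs), abort)  # 1 0
```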


  • Transfer routine in the transfer rules


In a transfer routine for a characteristic (here ZORDTYPE), the entire transfer structure is available in TRANS_STRUCTURE, RECORD_NO gives the current record number in the data package, and RESULT takes the computed value of ZORDTYPE.


  • Start routine in the Update Rules

In the start routine of the update rules, the entire DATA_PACKAGE is available as an internal table, in which calculations can be carried out. The total number of records is supplied in the RECORD_ALL parameter. If ABORT <> 0, the update process is cancelled.


  • Update rules

- While assigning the Key Figures

Update rules are data-target-specific and are used to upload data from the InfoSource to the data targets. Several data targets can be connected to a single InfoSource.


Update Type

Addition – the key figure is added for records with the same key. For ODS objects this can also be Overwrite, in which case the record overwrites the existing record with the same key in the ODS object.

No Update – In this case, the key figure is not written to the data target


Update Method

  1. The key figure can be directly updated from the infosource key figure

  2. The key figure can be filled with a formula, in which system fields can be used as well as the other fields from the infosource.

  3. Routine – Update routine could be used

In an update routine, COMM_STRUCTURE contains the details of the record, RECORD_NO the current record number, and RECORD_ALL the total number of records in the data package.


If the 'Return table' flag is set, the routine does not return a single record in RESULT, but multiple records in the internal table RESULT_TABLE.
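A conceptual model of a return-table routine (illustrative field names, not real BW structures): one source record produces several target records.

```python
# Conceptual model of an update routine with the 'Return table' flag set:
# instead of a single RESULT value, the routine fills RESULT_TABLE, so
# one source record can generate several target records. PLANTS/QTY are
# hypothetical fields.

def update_routine(comm_structure):
    result_table = []
    plants = comm_structure["PLANTS"]
    for plant in plants:  # split the quantity evenly per plant
        result_table.append({"PLANT": plant,
                             "QTY": comm_structure["QTY"] / len(plants)})
    return result_table

rows = update_routine({"PLANTS": ["P100", "P200"], "QTY": 100})
print(rows)  # [{'PLANT': 'P100', 'QTY': 50.0}, {'PLANT': 'P200', 'QTY': 50.0}]
```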


  • While assigning the characteristic


  1. Can be directly updated from the source characteristic

  2. Can be updated with a constant

  3. Can be updated with the attribute value of another characteristic from the master data tables

  4. Formula

  5. Routine, as seen below

  6. Initial Value


- While assigning time characteristics, there is an additional possibility ‘Time distribution’.

This can be useful if the time granularity of the InfoCube is finer than that of the InfoSource time field. In such a case, the key figure is distributed evenly across the complete time period. For example, suppose the record for the key figure in the InfoSource is:

Calmonth = 04.2007 Key figure = 1000 KG


When time distribution is applied to this record in the update rules, with the time characteristic in the InfoCube being 0CALDAY, the single record is distributed evenly across the month, giving the following records in the InfoCube:


Calday = 01.04.2007 Key Figure = 33.33 KG

Calday = 02.04.2007 Key Figure = 33.33 KG

...

Calday = 30.04.2007 Key Figure = 33.34 KG

Once flagged, time distribution applies to all the key figures in the InfoCube, not to a single key figure.
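The even split above can be sketched conceptually. One possible rounding scheme (an assumption; BW's exact rounding may differ from this and from the hand-rounded figures above) keeps the total exact by putting the remainder on the last day:

```python
# Conceptual sketch of time distribution: a monthly key figure is spread
# evenly over the days of the month. The rounding scheme (remainder on
# the last day) is an assumption for illustration.

def distribute(value, days):
    per_day = round(value / days, 2)
    shares = [per_day] * (days - 1)
    # Last day absorbs the rounding remainder so the total stays exact
    shares.append(round(value - per_day * (days - 1), 2))
    return shares

shares = distribute(1000, 30)
print(shares[0], round(sum(shares), 2))  # 33.33 1000.0
```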


  • Direct Update of Master Data

Once an InfoSource with direct update is created for an InfoObject, the system creates the DataSources for attributes, texts and hierarchies (where texts and hierarchies exist) when you select the source system in the InfoSource maintenance.


  • Deleting existing attribute of a characteristic with master data

The attribute cannot be deleted if it is used in the transfer rules, so it first needs to be deleted from the transfer rules. Otherwise there is no problem: the master data is not deleted, only the attribute column. Activation might take some time since the master data tables are regenerated. (BW 3.50)


The problem is with a compounding characteristic: since part of the key is lost when it is removed, the data must be deleted before removing the compounding characteristic. Otherwise, activating the characteristic afterwards fails with an error message.


  • Adding an extra attribute to the characteristic with master data

Again no problems. The data is not deleted and the master data tables are regenerated.

  • Adding an additional Characteristic to an existing dimension in an infocube with data

Adding to an existing dimension is fine. The dimension table is adjusted with an additional column, which holds initial values for the old data. There is no need to delete the data.


  • Adding an additional Characteristic to a new dimension in an infocube with data.

Adding to a new dimension is also fine. The new dimension table is created with initial values and the fact tables are adjusted. There is no need to delete the data.


  • Adding an additional key figure to an infocube with data

This is again not a problem. The data is retained, and the old records have the initial value in the new key figure.


  • Removing Key figures/Characteristics/Dimensions or reassigning characteristics to other dimensions when infocube has data.

This is not allowed. These changes are only possible after the InfoCube fact table data has been deleted.


  • Note that any change to the structure of the InfoCube deactivates the update rules. The update rules (and possibly the transfer rules) then need to be adjusted and reactivated.


Aggregates

  • Change run

When an aggregate is defined on the navigational attribute of a characteristic, and the attribute values of this navigational attribute change after a master data load, the loaded master data is not available for reporting until the attribute change run is executed. If you try to activate the master data directly, a popup appears:

The master data cannot be activated directly since attributes of the characteristic ZNORDER are used in aggregates.


Procedure

Start the change run via Administrator Workbench -> Tools -> Apply Hierarchy/Attribute Change to activate the master data. The aggregates are then also adjusted.


Select the infoobject for which the attribute change run needs to be executed and then schedule. The aggregates are readjusted in such a case.


Once the change run is executed, the aggregate is readjusted to make sure it is consistent with the changed attribute values. For e.g


Prior to the attribute change, Process Order = 10000 has the navigational attribute Order Type = YGT1.


The fact table of the InfoCube, which contains the Process Order, looks like this (simplified):

Process Order | KF1  | KF2  | KF3
10000         | 1000 | 2000 | 3000
10001         | 1000 | 2000 | 3000
10002         | 1000 | 2000 | 3000
10003         | 1000 | 2000 | 3000


The Process Order attribute table is as below:

Process Order | Order Type
10000         | YGT1
10001         | YGT2
10002         | YGT3
10003         | YGT2






The fact table of the aggregate would look like this (simplified view):

Package DimID | Order Type | KF1  | KF2  | KF3
              | YGT1       | 1000 | 2000 | 3000
              | YGT2       | 2000 | 4000 | 6000
              | YGT3       | 1000 | 2000 | 3000


Suppose the attribute Order Type of Process Order 10000 changes from YGT1 to YGT3. The attribute table will now look like:

Process Order | Order Type
10000         | YGT3
10001         | YGT2
10002         | YGT3
10003         | YGT2


Hence after the change run, the aggregate should look like:

Package DimID | Order Type | KF1  | KF2  | KF3
              | YGT2       | 2000 | 4000 | 6000
              | YGT3       | 2000 | 4000 | 6000
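The adjustment the change run performs in this example can be sketched conceptually: the aggregate is simply the fact table re-grouped by the (changed) navigational attribute.

```python
# Conceptual sketch of what the change run recomputes, using the example
# data above: re-aggregate the fact rows by the changed navigational
# attribute Order Type. Illustrative only, not BW's actual algorithm.

from collections import defaultdict

fact = {10000: [1000, 2000, 3000], 10001: [1000, 2000, 3000],
        10002: [1000, 2000, 3000], 10003: [1000, 2000, 3000]}

order_type = {10000: "YGT3", 10001: "YGT2",   # after the attribute change
              10002: "YGT3", 10003: "YGT2"}

aggregate = defaultdict(lambda: [0, 0, 0])
for po, key_figures in fact.items():
    for i, value in enumerate(key_figures):
        aggregate[order_type[po]][i] += value

print(dict(aggregate))
# {'YGT3': [2000, 4000, 6000], 'YGT2': [2000, 4000, 6000]}
```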

  • File Interface - Update modes

The following update modes are available when uploading data using the file interface:

1) Full Upload (ODS Object, InfoCube, InfoObjects)

The DataSource does not support a delta update. If you choose this method, the file will always be copied completely. This method can be used for ODS objects, InfoCubes, and InfoObjects (attributes and texts).

2) New Status for Modified Records (Delta only with ODS Objects - FIL0)

The DataSource supports both full update and delta update. Each record to be loaded provides the new status for all key figures and characteristics. This method can only be used for loading into ODS objects, i.e. the records contain only after images and therefore cannot be updated correctly into an InfoCube.

3) Additive Delta (InfoCube and ODS - FIL1)

The DataSource supports the additive delta update as well as the full update. For additive key figures, the record to be loaded provides only the change to the key figure. This method can be used for both ODS objects and InfoCubes, since the records contain additive (delta) images.

  • Updating data into an ODS object

An ODS object consists of three tables: the activation queue (/BIC/A<ODSNAME>40), the active data table (/BIC/A<ODSNAME>00) and the change log table. The activation queue has the request ID, package ID and record number as its key. When the request is activated, the data is transferred from the activation queue to the active table, which does not contain any request information; its key comprises the key fields of the ODS object. The change log holds information about the changes to the records, which can be used to supply the delta to subsequent data targets.

There is also an additional field in the ODS tables, RECORDMODE. This field also comes into play when uploading the data into an InfoCube. If it is not set in the communication structure, it is blank, which means the record is treated as an after image in the ODS object.
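The activation step above can be modelled conceptually (illustrative, not the generated ABAP): the active table is overwritten, and the change log receives a before image with reversed sign plus an after image, which later supplies the delta to subsequent data targets.

```python
# Conceptual model of ODS activation in overwrite mode (illustrative,
# not the generated ABAP). KEY/QTY are hypothetical fields.

def activate(active, queue):
    change_log = []
    for rec in queue:
        key, qty = rec["KEY"], rec["QTY"]
        if key in active:
            # Before image cancels the old record in downstream cubes
            change_log.append({"KEY": key, "QTY": -active[key],
                               "RECORDMODE": "X"})
        change_log.append({"KEY": key, "QTY": qty, "RECORDMODE": " "})
        active[key] = qty  # overwrite in the active table
    return change_log

active = {"A": 100}
log = activate(active, [{"KEY": "A", "QTY": 150}])
print(active)  # {'A': 150}
print(log)     # before image with -100, then after image with 150
```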

  • 0RECORDMODE

This attribute describes how a record is updated in the delta process. The various delta processes support different combinations of the seven possible characteristic values. If a DataSource implements a delta process that uses several characteristic values, the record mode must be a part of the extract structure and the name of the corresponding field has to be entered in the DataSource as a cancellation field (ROOSOURCE-INVFIELD).

The seven characteristic values are as follows:


1) ' ': The record delivers an after image.

The status is transferred after something is changed or added. You can update the record into an InfoCube only if the corresponding before image exists in the request.

2) 'X': The record delivers a before image

The status is transferred before data is changed or deleted. All record attributes that can be aggregated have to be transferred with a reversed +/- sign. The reversal of the sign is carried out either by the extractor (default) or by the Service API; in the latter case, the indicator 'Field is inverted in the cancellation field' must be set for the relevant extraction structure field in the DataSource. These records are ignored if the update is a non-additive update of an ODS object.
The before image is complementary to the after image.

3) 'A': The record delivers an additive image.

For attributes that can be aggregated, only the change is transferred. For attributes that cannot be aggregated, the status after a record has been changed or created is transferred. This record can replace an after image and a before image if there are no non-aggregation attributes or if these cannot be changed. You can update the record into an InfoCube without restriction, but this requires an additive update into an ODS Object.

4) 'D': The record has to be deleted.

Only the key is transferred. This record (and its DataSource) can only be updated into an ODS Object. (The record is deleted from the active table, but in the change log you have the exact reverse image, with negative signs to cancel the records updated into a subsequent infocube for e.g.)

5) 'R': The record delivers a reverse image.

The content of this record is the same as the content of a before image. The only difference is in an ODS object update: existing records with the same key are deleted. (For an ODS object the behaviour is similar to recordmode 'D': the record is deleted from the active table, and the before image goes to the change log with recordmode 'R'.)

6) 'N': The record delivers a new image.

The content of this record is the same as for an after image without a before image. When a record is created, a new image is transferred instead of an after image. The new image is complementary to the reverse image.
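The effect of these characteristic values on an ODS object in overwrite mode can be sketched conceptually (illustrative only, not BW's generated code):

```python
# Conceptual sketch of how the 0RECORDMODE values drive an ODS update in
# overwrite mode. Illustrative only; real behaviour lives in generated
# ABAP programs.

def apply_record(active, key, value, mode):
    if mode == "A":                # additive image: add the delta
        active[key] = active.get(key, 0) + value
    elif mode in (" ", "N"):       # after / new image: overwrite status
        active[key] = value
    elif mode in ("D", "R"):       # delete / reverse image: remove record
        active.pop(key, None)
    elif mode == "X":              # before image: ignored for overwrite
        pass

active = {}
apply_record(active, "A", 100, "N")   # new image    -> {'A': 100}
apply_record(active, "A", 150, " ")   # after image  -> {'A': 150}
apply_record(active, "A", 25, "A")    # additive     -> {'A': 175}
apply_record(active, "A", None, "D")  # deletion     -> {}
print(active)  # {}
```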

The table RODELTAM determines which characteristic values a delta process uses (columns UPDM_NIM, UPDM_BIM, UPDM_AIM, UPDM_ADD, UPDM_DEL and UPDM_RIM). The table ensures that only meaningful combinations of the above values are used within a delta process. When extracting in 'delta' update mode, a DataSource that uses a delta process can deliver in the indicator field only those characteristic values that are specified in the delta process.

When a DataSource is delta-enabled, a delta update is possible for it: in subsequent extractions it does not supply all the records, but only the new or changed ones. How these records are supplied to BW depends on the delta process. The delta processes are maintained in the table RODELTAM and have the following default values:

(blank)  Delta Only with Full Upload (ODS or InfoPackage Selection)
A        ALE Update Pointer (Master Data)
ABR      Complete Delta with Deletion ID via Delta Queue (Cube-Capable)
ABR1     Like 'ABR', but Serialization only Request by Request
ADD      Additive Extraction via Extractor (e.g. LIS InfoStructures)
ADDD     Like 'ADD', but via Delta Queue (Cube-Capable)
AIE      After Images via Extractor (FI-GL/AP/AR)
AIED     After Images with Deletion Indicator via Extractor (FI-GL/AP/AR)
AIM      After Images via Delta Queue (e.g. FI-AP/AR)
AIMD     After Images with Deletion ID via Delta Queue (e.g. BtB)
CUBE     InfoCube Extraction
D        Unspecific Delta via Delta Queue (Not ODS-Capable)
E        Unspecific Delta via Extractor (Not ODS-Capable)
FIL0     Delta via File Import with After Images
FIL1     Delta via File Import with Delta Images
NEWD     Only New Records (Inserts) via Delta Queue (Cube-Capable)
NEWE     Only New Records (Inserts) via Extractor (Cube-Capable)
O
ODS      ODS Extraction
X        Delta Unspecified (Do Not Use!)

What exactly is the use of the delta process type? Is it only to make sure that the data is interpreted correctly in BW, in the sense that it is uploaded into the correct data target? For example, a DataSource with delta process type AIE, which supplies after images, is not uploaded into an InfoCube, but into an ODS object in overwrite mode.

Checked in transaction RSA2: the cancellation field mentioned above is valid only for certain Content DataSources; it cannot be set for generic DataSources. Hence generic DataSources always send after images or additive images. The recordmode can be adjusted or set only in the transfer rules or update rules to affect the update of data into the ODS object or InfoCube.

  • Important finding related to the update rules and ODS

While updating into an ODS object, 0RECORDMODE is taken into consideration. Checking the generated program, it appears that 0RECORDMODE is of no consequence while uploading data into an InfoCube, only into an ODS object. If the ODS object data field is set to Overwrite, the before images in the extracted records are ignored in the update rules; if it is set to Addition, the before and after images are both taken into account.

Program RSODSACT1 is used to activate the ODS object data. It calls the function RSSM_PROCESS_ODSACTIVATE to activate the data.

A set of template programs are used to read/write into ODS objects. Please see the RSTMPL* programs in SE38 to get a more detailed idea.

The template used to generate the activation program for the ODS is RSDRO_ACTIVATE_TMPL

The recordmode field indicates what kind of record it is: a deletion image, a reverse image, a before image or an after image. It is significant when uploading data into an ODS object, which automatically maintains the change log so that this data can be uploaded into a subsequent cube. The data could also be uploaded into another ODS object, where the recordmode again comes into play, but the recordmode is not used when updating data into the cube.

  • ODS Object – BEx reporting flag

With this indicator you determine whether the ODS object is immediately available for BEx queries.

If the indicator is switched off, no SIDs have to be determined for the new characteristic values when activating the data in the ODS object. This improves the performance of the activation.

Switch off this indicator for all ODS objects that are essentially used for further processing into other ODS objects or InfoCubes. It is still possible to define InfoSets with the ODS object and to carry out queries on it.

Beginning

Hello all,

Have started preparing for the SAP BW certification exams for the 3.5 release and thought about posting my findings and notes here. Maybe they will be helpful to others preparing for the same. Nothing advanced, just the obscure details in the form of notes.

You can also find this document shared on google docs at the following link:
http://docs.google.com/Doc?id=dhtpp2mg_5g6ppdg

The findings are organised in the form of points, related to different areas. Will revise these if needed. Please feel free to post your comments and feedback.

Thanks, Ned