Sunday, June 30, 2024

Possibilities of BPC package prompt's UI

There are a few options for setting up the input given by a user via the pop-up windows of a Data Manager script (DS).

The first consideration is whether the input fields in the prompts should support browsing BPC dimension hierarchies.

If hierarchy browsing is not required, the prompt can be set up as a simple text field, i.e. the PROMPT command is of type TEXT.

DS:

PROMPT(TEXT,%SRC_TIM%,"Enter Source TIME","%TIME%")

TASK(/CPMB/DEFAULT_FORMULAS_LOGIC,REPLACEPARAM,SRC_TIM%EQU%%SRC_TIM%)
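The parameter handed over via REPLACEPARAM can then be referenced in the script logic by its name enclosed in $ signs. A minimal sketch (assuming the TIME dimension from the example above):

*XDIM_MEMBERSET TIME = $SRC_TIM$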

 

In case hierarchy browsing is needed, the PROMPT command is of type COPYMOVEINPUT:


DS:

PROMPT(COPYMOVEINPUT,%SRC_TIM%,%TGT_TIM%,"Select the TIME members from to","TIME")

TASK(/CPMB/DEFAULT_FORMULAS_LOGIC,MEMBERSELECTION,SRC_TIM%EQU%%SRC_TIM%%TAB%TGT_TIM%EQU%%TGT_TIM%)

 

A disadvantage of the above option supporting hierarchy browsing is that if several dimension values are to be entered, each of them is placed on a separate pop-up window.

In case the user prefers to have one pop-up window with all the values, the following option can be used. It is again the PROMPT command, this time of type SELECT:



DS:

PROMPT(SELECT,%SELECTION_IN%,,"Select data to copy","TIME,VERSION")

TASK(/CPMB/FX_RESTATMENT_LOGIC,REPLACEPARAM,SELECTION_IN%EQU%%SELECTION_IN%)

 

More information:

TEXT Prompt() Command

COPYMOVEINPUT Prompt() Command

SELECT Prompt() Command


Saturday, June 1, 2024

SAP Datasphere (and BW bridge) limitations

When comparing SAP Datasphere to the traditional SAP BW system, several technical limitations become evident. Here are some of them:

 

1 Mature Feature Set / Functionality Gap

SAP BW has a more mature and extensive feature set due to its long history, including advanced data modeling, ETL capabilities, and built-in analytics, which SAP Datasphere may lack. As an example:

·        OLAP Engine/BW Analytical Manager functionality not supported, e.g. no analysis authorizations, no query as InfoProvider, no query execution, no calculation of non-cumulative key figures (inventory management)

·        No add-ons (e.g. SEM-BCS) supported

 

2 Data Modeling related to SAP HANA

·        Generation of External SAP HANA Calculation Views not possible

·        Not possible to use SAP HANA Calculation Views as part of Composite Provider

·        No planning scenarios supported

·        Temporal joins in Composite Provider not supported

·        Many process types used in process chains are not supported (e.g. ABAP program execution, archiving, job event, BODS-related processes, etc.)

·        Ambiguous Joins not supported

·        BADI Provider as PartProvider not supported

·        Open ODS View without calculation scenario not supported

 

3 Data Integration

·        Connections to source systems are supported only via ODP technology and push scenarios.

 

4 Performance and Scalability

·        Current sizes of Datasphere instances are limited to 4 TB, which may not be enough for organizations that run BW systems bigger than that.

 

5 Reporting and Analytics

·        No BW (BEx) query support

·        No user exit BW (BEx) variables

·        No unit conversion

·        No constant selections in BW (BEx) reports

·        No BW (BEx) query variables support in DTP filters

 

6 Application development

·        Application development is not supported; applications are to be built using SAP BTP only.

 

7 UI/UX

·        No SAP GUI access

 

 

These technical limitations highlight the areas where SAP Datasphere is still catching up to the more established and mature SAP BW system. As SAP continues to develop Datasphere, many of these limitations may be addressed over time.

Tuesday, April 30, 2024

aDSO: Validity and Reference-Point tables

As mentioned in my post Storage data target type in BW Request Management, there are several tables that store data for aDSO objects. Besides the very well-known ones like the active, inbound, and change log tables, there are also others.

In case the aDSO is of type Inventory:



There are also the two tables below available (prefixed by the namespace):

Validity Table: /BIC/A<technical name>4

Reference Point Table: /BIC/A<technical name>5
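As a concrete illustration, for a hypothetical inventory-enabled aDSO with the technical name ZSTOCK, the pattern above yields:

Validity Table: /BIC/AZSTOCK4

Reference Point Table: /BIC/AZSTOCK5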

 

The purpose of the inventory-enabled aDSO is to manage non-cumulative key figures. A non-cumulative measure, in the context of data analysis or statistics, refers to a metric or variable that does not accumulate or aggregate over time or across categories. In other words, it represents a single point or snapshot value rather than a total or sum.

E.g. when analyzing sales data for a particular day, the number of units sold on that day would be a non-cumulative measure. It doesn't consider sales from previous days; it just reflects the sales for that specific day.

Non-cumulative measures are often used when you need to examine data at a specific point in time or within a specific category without considering historical or cumulative values. They are particularly useful for analyzing trends, patterns, or comparisons within discrete units of analysis.

The tables are also available via t-code RSMNG in the Utilities menu: Display Validity and Display Reference-Point tabs:



Friday, March 29, 2024

Activation of SNP Glue objects after transport

In my last blog I wrote about SNP Glue transports. Once the Glue objects are moved to the target SAP system, there is one more step to be executed before the objects are ready to be used: the so-called object activation step. It can be performed in a dedicated t-code (/DVD/GLTR - DataVard Glue Transport Workbench), where there is a button called Activate Request in the toolbar.


Once you hit that button, you get a pop-up window asking for the TR whose objects are to be activated.


The next pop-up lists all the objects in that TR. One can choose whether a particular object is to be activated or not. There is also a possibility to delete the Glue objects by selecting the Delete checkbox, and one more checkbox is to be checked if the objects are new and are to be created/changed.


Once this pop-up is confirmed, the objects are physically activated; afterwards the objects are visible in the Glue Object Navigator (t-code /DVD/GL80) and are ready to be used.

Transports of SNP Glue objects

In my previous post related to the SNP Glue integration tool, I described how the tool can be leveraged to transfer data between Snowflake and SAP BW/4HANA systems. However, I left out the part about how to transport the SNP Glue objects, so I write about it in this post.

There is a dedicated t-code (/DVD/GLTR - DataVard Glue Transport Workbench) to manage transports of the objects developed for Glue.


The Glue transport t-code allows collecting the objects for the transport by package, user name, or object name. Once the objects are entered on the selection screen, there is an Execute button. Once it is selected, the objects are written to a regular SAP STMS transport request. Here's what such a request looks like:



Entries in the TR are basically transparent table entries. The values in these table entries will be transported to the target SAP system. This is because the Glue transport contains metadata objects (object definitions); further, there are ABAP objects like classes and programs. Based on this metadata, the physical Glue objects are recreated in the target system upon object activation.

Below are some of the tables that carry the metadata per particular Glue object:

/DVD/GL_T_DD_TH – Glue table header objects

/DVD/GL_T_E2_MAP - Persistent Transformation/mapping

/DVD/GL_T_E2_OBJ - Extractor 2.0 object directory

/DVD/GL_T_E2_OBP - Extraction object's parameters

/DVD/GL_T_FLD_OB - Glue objects mapped to logical folder

/DVD/GL_T_FOLDER - Table for mapping folders inside packages

 

Once the objects are collected into the TR via t-code /DVD/GLTR, the TR can be released like any other SAP transport, e.g. in the SE10 t-code. At this point the TR is ready to be imported into the target SAP system.



Sunday, March 24, 2024

Simple Snowflake to SAP BW data flow via SNP Glue

I was recently involved in a data transformation project leveraging the SNP Glue tool. SNP Glue (formerly Datavard Glue) is designed to integrate and connect various SAP and non-SAP systems, facilitating data exchange, synchronization, and consolidation.

In my scenario we used Glue to transfer data located in a Snowflake system in the cloud to an on-premise SAP BW/4HANA system. There are a couple of Glue objects that need to be developed to enable the information exchange between Snowflake and SAP BW/4HANA:

1. Glue Storage

2. Glue table

3. SAP table

4. Glue Fetcher

5. Glue Consumer

6. Glue Extraction Process

7. SAP BW Datasource

8. SAP BW target infoprovider – e.g. aDSO

 

All these objects are developed in the SAP BW system where the Glue tool is installed. The main component of Glue is a cockpit (SNP Glue Cockpit) that can be accessed via t-code /DVD/GLUE (ABAP program /DVD/GL_MAIN). All other parts of Glue can be accessed from here. Glue can also be installed as an add-on; in this case there is a component called Glue in the particular SAP BW installation.



1. Glue Storage – a central object that encapsulates all information needed to connect to the remote object the data will be read from or written to. The Storage is an SNP object that is shared between their tools (e.g. SNP Outboard). Below are the settings that need to be provided to the Storage object in order to connect to Snowflake. There is a dedicated t-code (/DVD/SM_SETUP) to maintain the storages. There are 2 storages that need to be defined for the connection to Snowflake:

1.1 Internal Glue Storage

Storage ID – logical name, defined by the Glue developer; usually it must follow a naming convention given by the particular development guideline

Storage type – predefined as SNOW_STAGE, type binary

Description – free text, should describe e.g. the meaning of the data that is being transferred

Java connector RFC – ID of the connection to the Java connector (JCo). The JCo is mandatory and must be installed in the SAP BW system in order to use Glue. Similarly to the Storage itself, the JCo connections are shared across different SNP tools. There is a special t-code (/DVD/JCO_MNG) to set up and control the JCo connections.

Account name - name of the Snowflake account, in the format: <account name>.<region>.<platform>

User role – I haven't used this

JAVA Call Repeat – 0 by default

Repeat delay (seconds) - 0 by default

Driver path – path to the JDBC Snowflake driver, located at SAP server

Connection pool size – 0 by default

Username – user at Snowflake side

Password – of the user above

 

1.2 Glue Storage

Storage ID - logical name, see above in internal storage. Notice that the ID is different from the ID used in internal storage.

Storage type - predefined as SNOWFLAKE, type TAB, for transparent storage

Description – see in internal storage

Referenced storage – id of internal storage, main storage uses internal one

Java connector RFC - see in internal storage

JDBC Call Repeat – 0 by default

JDBC Repeat delay (seconds) – 0 by default

Account name - see in internal storage

Warehouse – existing warehouse in the Snowflake account, WH that will be used to perform computing operations like SQL in the snowflake

Database name – name of the database in the Snowflake

Database schema – name of the database schema in the Snowflake

User Role – user role in the Snowflake

Driver path – Snowflake driver path on the SAP server

Hints - string that is added to connection string when JDBC driver establishes the connection

Connection pool size – 0 by default, number of connections that can be kept open in the pool

File Type – CSV or Parquet, I used CSV type

Table name prefix - prefix of all Glue tables created within this storage

Use Snowflake App for data merge – I haven't used it; if enabled, the SNP Native App is used

Wrap values in staged CSV files – I haven't used it

Data delivery guarantee – EO (Exactly-once), Data transfer behavior
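As an illustration, a filled-in Snowflake storage definition could look like the following (all values are made up for the sketch and must be replaced with your own):

Storage ID: ZSNOW_TAB

Storage type: SNOWFLAKE

Referenced storage: ZSNOW_STAGE

Account name: xy12345.eu-central-1.aws

Warehouse: COMPUTE_WH

Database name: SALES_DB

Database schema: PUBLIC

Driver path: /usr/sap/trans/snowflake-jdbc.jar

File Type: CSV

Data delivery guarantee: EO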

 

All other Glue objects listed below are maintained in the Object Navigator part of the SNP Glue Cockpit. The Object Navigator can be accessed via a dedicated t-code - /DVD/GL80.


2. Glue table – can also be maintained in a dedicated t-code, /DVD/GL11. The Glue table is a metadata object which represents the remote object (a Snowflake view in my case). It allows working with the data from that remote object in the SAP landscape. It contains a list of the columns from the table or view in the remote DB; again, in my case it is the list of columns in my Snowflake DB's table. The Glue table must be activated before it can be used in other Glue objects. This table doesn't store data on the SAP side persistently; the data is only read from the remote DB when the extraction process runs.

 

3. SAP table – an SAP system DDIC table that stores the data physically once it is fetched from the remote DB. The data is stored persistently here.

 

The objects below (Fetcher, Consumer and Extraction Process) are part of Glue Extractor 2.0.

4. Glue Fetcher – allows data transfer from source to target objects. It refers to the Glue Table object. It defines whether particular columns in the remote table/view are used for selection. Further, it defines what delta mechanism type is used (FULL, DATA, TIMESTAMP, VALUE, VALUE_DIST). In my case I used FULL extraction with a cursor.


5. Glue Consumer – comes into the picture when the data is written into the target. In my case it specifies the SAP table I created in step no. 3.


6. Glue Extraction Process – the object responsible for running the whole process, from reading the data from the source (by the Fetcher) to writing the data (by the Consumer). The Extraction Process can also be used to manage the execution of extractions and monitor launched extractions. It has capabilities similar to running a DTP process or a process chain in SAP BW.

Here the Fetcher and Consumer need to be specified at design time. The whole process is driven by a generated Z* report; you can find the report name in the Generated report name field. It is also possible to define a data transformation by specifying rules and start/end routines. In my case I haven't used any routines or transformations, just a pure 1:1 mapping.


7. SAP BW DataSource – in my scenario I created a custom DS to be able to store the data in the BW aDSO object. The DS was based on the SAP table object I created in step no. 3.


8. SAP BW target InfoProvider – e.g. the aDSO that is the final storage of the Snowflake data in SAP BW. This is the object I used further for reporting, via a CompositeProvider.

 

To run the data replication from Snowflake to SAP BW/4HANA I used a process chain. The 1st step in the chain was to run the Glue Extraction Process; this got the Snowflake data into the SAP table. From there I used a DTP to load the data into my aDSO. The resulting flow is summarized below.
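Put together, the whole replication flow from the steps above is:

Snowflake view → Glue Table → Glue Extraction Process (Fetcher + Consumer) → SAP table → BW DataSource → DTP → aDSO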

 

More information:

SNP Glue product page

SNP Glue documentation



Thursday, February 22, 2024

How to switch BPC environment state (on/off line)?

In one of my older posts, I explained a few ways to check the BPC environment state: whether it is online or offline. This time I focus on how to change the state of the BPC environment.

1. in BPC web client: On the main screen of the web client, choose the Settings button in the top right corner. From there, click on the Environment button in the drop-down Settings menu.

A new pop-up is shown next, listing all available environments in the system. In the bottom part of the window there is a “Manage All Environments” button. Click on that one.

A new screen called Manage All Environments is shown next. From here, choose the environment whose status you want to change and click on the “Change Status” button on the screen toolbar.


As a last step, there is a new pop-up window where the environment status can be changed via a radio button.


2. in SAP BPC backend: Use t-code SE16 to browse table UJ0_PARAM_APP. Put the following selection into the selection screen:

APPSET_ID = <your BPC environment tech name>

FIELD = AVAILABLEFLAG


In the VALUE column of the table there is either 0 or 1. Zero means the environment is set to offline and one means it is set to online.
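The same check can also be scripted. A minimal ABAP sketch, assuming the table and field names shown above (the environment name is illustrative):

" Read the AVAILABLEFLAG parameter of a BPC environment (name is made up)
SELECT SINGLE *
  FROM uj0_param_app
  WHERE appset_id = 'MYENVIRONMENT'
    AND field     = 'AVAILABLEFLAG'
  INTO @DATA(ls_param).

IF sy-subrc = 0 AND ls_param-value = '1'.
  WRITE / 'Environment is online'.
ELSE.
  WRITE / 'Environment is offline'.
ENDIF.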


3. in SAP BPC backend – custom program: A small ABAP program can be created that changes the content of the VALUE column in table UJ0_PARAM_APP. That can be done via a call of method SET_APPSET_STATUS of class CL_UJA_APPSET.
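A minimal sketch of such a program. The class and method names come from above, but I have not verified the exact method signature, so the parameter names below are assumptions that should be checked in SE24 before use:

" Hypothetical call - verify the real signature of CL_UJA_APPSET=>SET_APPSET_STATUS in SE24
TRY.
    cl_uja_appset=>set_appset_status(
      i_appset_id = 'MYENVIRONMENT'    " BPC environment technical name (illustrative)
      i_status    = '1' ).             " assumption: '1' = online, '0' = offline
  CATCH cx_root INTO DATA(lx_err).
    MESSAGE lx_err->get_text( ) TYPE 'I'.
ENDTRY.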

 

More information:

How to check BPC environment state (on/off line)?

Saturday, January 13, 2024

System does not let you log on using a password

In case a user is getting the below message on an attempt to log on to an SAP NetWeaver/ABAP Platform based system:

This system does not let you log on using a password

Message no 00139

 

It means that password-based logon to the particular system was disabled. In such a case the user has to use a different authentication method to log on. The following methods exist:

• Using password (conventional logon) – method that is not possible in this case

• Using an external security product (SNC)

• Using an X.509 browser certificate (intranet/Internet)

• Using a Workplace Single Sign-On (SSO) ticket

 

In case the majority of the users in the respective SAP system are using an authentication type other than password, the password logon should be disabled. The disabling can be done via the system profile parameter called login/disable_password_logon.

The parameter’s default value is 0 that means the password-based logon is possible. Other allowed values are:

1 - Password logon only for users of the group specified by login/password_logon_usergroup

2 - Password logon is no longer possible in general
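For example, to switch off password logon completely, the parameter can be set in the instance profile (e.g. via t-code RZ10) as follows:

login/disable_password_logon = 2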


More information:

Profile Parameters for Logon and Password (Login Parameters)

320991 - Error codes during logon (list)

Tuesday, December 26, 2023

Request Status Process Management (RSPM)

The Request Status Process Management (RSPM) concept for BW request management has been in place since the higher support packages of SAP BW 7.5 and also in SAP BW/4HANA. It replaces the RSSM (Request Status Management) request management based on 0REQUID. Request Status Process Management (RSPM) is based on Request Transaction Sequence Numbers (TSN, 0REQTSN). Details can be found here.

However, in this post I want to mention a few interesting TSNs that are used in BW systems. The TSN is typed by the RSPM_TSN domain in DDIC. The following special TSNs are defined in the interface IF_RSPM_CONSTANTS; a small usage sketch follows the list below.

 

transfer_min = '18991231000000000000000' = {1899-12-31 01:00:00 000000 CET}

transfer_min_old = '19000101000000000000000' = {1900-01-01 01:00:00 000000 CET}

transfer_max = '19680120031408000000000' = {1968-01-20 04:14:08 000000 CET}

min = '19700101000000000000000' = {1970-01-01 01:00:00 000000 CET}

reorg_low_min = '19700101000000000001000' = {1970-01-01 01:00:00 000001 CET}

reorg_low_max = '19700101000000999998000' = {1970-01-01 01:00:00 999998 CET}

housekeeping_min = '19720101000000000001000' = {1972-01-01 01:00:00 000001 CET}

housekeeping_max = '19723112235959999998000' = {????-??-?? ??:??:?? ?????? UTC}

dummy = '20140206171506000000000' = {2014-02-06 18:15:06 000000 CET} - timestamp of data element RSPM_TSN

real_min = '20140206171506000001000' = {2014-02-06 18:15:06 000001 CET} - as above plus 1 µs

real_max = '28991231235959999998000' = {2900-01-01 00:59:59 999998 CET}

not_activated = '29000101000000000001000' = {not activated} - dummy TSN for not yet activated load requests

simulation_min = '29000102000000000001000' = {2900-01-02 01:00:00 000001 CET}

simulation_max = '29000102000000999999000' = {2900-01-02 01:00:00 999999 CET}

reorg_high_min = '29991231235959000001000' = {3000-01-01 00:59:59 000001 CET}

reorg_high_max = '29991231235959999998000' = {3000-01-01 00:59:59 999998 CET}

max = '99991231235959999999999' = {9999-12-31 23:59:59 999999 UTC}

nc_reporting = '99999999999999000000001' = {????-??-?? ??:??:?? ?????? UTC}

nc_user = '99999999999999000000002' = {????-??-?? ??:??:?? ?????? UTC}

nc_temp = '99999999999999000000003' = {????-??-?? ??:??:?? ?????? UTC}
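As a small usage sketch, the constants can be compared against a request's TSN directly in ABAP. This assumes the constants are plain character values as listed above and that RSPM_TSN is usable as a type (it is mentioned as a data element above); the TSN literal itself is made up:

DATA(lv_tsn) = CONV rspm_tsn( '20240206171506000001000' ). " made-up example TSN

IF lv_tsn = if_rspm_constants=>not_activated.
  WRITE / 'Load request not yet activated'.
ELSEIF lv_tsn BETWEEN if_rspm_constants=>real_min AND if_rspm_constants=>real_max.
  WRITE / 'Regular (real) request TSN'.
ENDIF.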

 

More information:

BW request types: RSSM vs RSPM

PC does not start due to not maintained start variant

I recently faced a strange error within my newly created process chain. On an attempt to start it up, I got the below error message:

Job could not be released. Probably an authorization issue

Message no. RSAR051


I was sure that it had nothing to do with authorization. I checked which user is assigned to the chain in its attributes as the Execution User. That user did not lack any authorizations either.

It turned out that it was just the chain's start variant missing the scheduling condition part: no Immediate Start/Date Time/After Job/After Event/Operation Mode data was entered.

Once I added it, the chain worked just fine.



The error message is issued within FM RSSM_PLAN_START_BATCH_JOB, somewhere around line 661, where the return value l_job_released of a call to another FM (JOB_CLOSE) is evaluated:

MESSAGE s051(rsar) WITH 'Job could not be released.'(210) 'Probably an authorization issue'(211) '' ''.

However, the real reason why the job scheduling failed is not properly propagated. This is fixed in SAP Note 3164846, which introduces a CASE/WHEN statement to do that.
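For context, a sketch of the standard JOB_OPEN/JOB_CLOSE pattern that the FM uses: scheduling a job and evaluating whether it was released (the job name is illustrative, and the job steps are omitted):

DATA: lv_jobcount TYPE btcjobcnt,
      lv_jobname  TYPE btcjob VALUE 'ZMY_TEST_JOB',
      lv_released TYPE btcchar1.

CALL FUNCTION 'JOB_OPEN'
  EXPORTING
    jobname  = lv_jobname
  IMPORTING
    jobcount = lv_jobcount.

" ... job steps would be added here, e.g. via SUBMIT ... VIA JOB ...

CALL FUNCTION 'JOB_CLOSE'
  EXPORTING
    jobcount         = lv_jobcount
    jobname          = lv_jobname
    strtimmed        = 'X'          " start immediately
  IMPORTING
    job_was_released = lv_released
  EXCEPTIONS
    OTHERS           = 1.

IF sy-subrc <> 0 OR lv_released IS INITIAL.
  " this is the situation in which RSSM_PLAN_START_BATCH_JOB raises the generic RSAR051 message
  WRITE / 'Job could not be released'.
ENDIF.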

 

More information:

3164846 - Error messages from job scheduling are not returned to caller

Saturday, December 16, 2023

"Results table" feature in SAP Code Inspector

SAP Code Inspector (t-code SCI) is a tool for analyzing and inspecting ABAP code. The tool helps ABAP developers ensure that ABAP code complies with certain coding standards, performance guidelines, and best practices. It performs static code checks, meaning it analyzes the source code without actually executing it. The tool identifies potential issues, vulnerabilities, and areas for improvement in the code, facilitating the development of more robust and efficient applications.

Here I want to mention one feature of the SCI that is most likely not very well known. It is called the "Results table". It is a button on the SCI's toolbar:

View of "Results table" provides comprehensive view where checks from the SCI and ABAP Extended Program Check (t-code SLIN) are combined.

In the picture below, issues marked in red are reported by the SCI; issues marked in green are reported by the SLIN.


The "Results table" button is a new function that is added into SAP NetWeaver system as of SAP_BASIS 702.


BW/4HANA 2.0: External SAP HANA SQL View

There is a new feature in BW/4HANA 2.0, starting with the Initial Shipment Stack: this version of SAP BW/4HANA 2.0 introduces the external SAP HANA SQL view. The new view follows one of the principles of SAP BW/4HANA, which is openness.

In older releases of SAP BW/4HANA there was no dedicated SQL view to support extraction from aDSO objects, and SAP doesn't recommend using the active table of the aDSO for extraction (see SAP Note 1682131) because those tables are not publicly released. For SAP BW/4HANA versions lower than BW/4HANA 2.0, SAP recommends either using the HANA Modeler to import BW InfoProvider metadata and generate a HANA model, or generating the HANA model for the BW InfoProvider from the BW backend.

Now, with BW/4HANA 2.0, the new feature brings a new SQL view that is supposed to be used for extraction of SAP BW objects like aDSOs or InfoObjects (see SAP Note 2723506). The naming convention for the new SQL view ends with *8. That means that on top of the other DDIC tables related to the aDSO object there is a new view /BIC/A<aDSO_tech_name>8, e.g. /BIC/AZMMA_ADSO8 - View for external Access for DataStore ZMMA_ADSO.

A full list of DDIC tables/views for my example aDSO ZMMA_ADSO:

 

TABLES:

/BIC/AZMMA_ADSO                Inbound Table for DataStore ZMMA_ADSO

/BIC/AZMMA_ADSO1               Inbound Table for DataStore ZMMA_ADSO

/BIC/AZMMA_ADSO2               Active Data Table for DataStore ZMMA_ADSO

/BIC/AZMMA_ADSO3               Change Log for DataStore ZMMA_ADSO

VIEWS:

/BIC/AZMMA_ADSO6               View for Extraction from DataStore ZMMA_ADSO

/BIC/AZMMA_ADSO7               View for Reporting for DataStore ZMMA_ADSO

/BIC/AZMMA_ADSO8               View for external Access for DataStore ZMMA_ADSO

 

The views ending with 6 and 7 were generated by BW/4HANA 1.0, and the view ending with 8 was generated in BW/4HANA 2.0. From my experience (based on BW/4HANA 2.0 SP05), the *8 view doesn't get generated by the aDSO activation right away; in my case it was generated at the time I loaded data into the aDSO for the 1st time.
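Once generated, the *8 view can be consumed like any other SQL view, e.g. from ABAP. A minimal sketch using the example aDSO above:

" Read from the external access view of aDSO ZMMA_ADSO
SELECT *
  FROM /bic/azmma_adso8
  UP TO 100 ROWS
  INTO TABLE @DATA(lt_data).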

 

More information:

How to read/write/delete from/to aDSO objects

1682131 - SAP BW tables in SAP HANA Information Views and ABAP CDS Views not supported

2723506 - External SAP HANA SQL View with SAP BW/4HANA 2.0