Thursday, October 31, 2024

How to clear cache in SAP Analysis for Microsoft Office

When SAP Analysis for Microsoft Office (AfO) is reinstalled or upgraded, several errors can pop up that make it impossible to reuse or refresh the existing reports. Typically, AfO crashes or freezes with errors like the following.

 

"An exception occurred in one of the data sources. SAP BI Add-in has disconnected all data sources. (ID-111007)"

"Nested exception. See inner exception below for more details."

 

The root cause of errors like these is that the upgrade/uninstall process does not clear the AfO cache. To correct it, the cache needs to be cleared manually.

The first cache folder to clear is:

"c:\Users\<USER_ID>\AppData\Roaming\SAP AG\SAP BusinessObjects Advanced Analysis\cache\"

In this folder, the cache files follow a naming convention like this:

<SID>.cache

 

The other folder to check is the COF (Common Office Framework) directory under %APPDATA%, accessible via the link:

"%APPDATA%\SAP\Cof"

which points to the folder:

"c:\Users\<USER_ID>\AppData\Roaming\SAP AG\SAP BusinessObjects Advanced Analysis\cache\"

 

Once the cache is cleared, start AfO from the Windows Start menu (All Apps -> SAP Business Intelligence -> Analysis for Microsoft Office) and it should be possible to refresh AfO reports again.

 

More information:

2979452 - An exception occurred in one of the data sources. SAP BI Add-in has disconnected all data sources [1e04-3ff5-15]

AfO wiki

Friday, October 25, 2024

Scheduling a process chain in an alternate time zone

In BW systems there is an option to run a process chain (PC) in a different time zone. It is available in the start variant of the PC via a checkbox called “Use Alternative Time Zone”. Once it is checked, a new field for the time zone shows up.

The feature can be useful when the BW administrator is not sure which time zone the BW system runs in; the alternative time zone can be used instead.

Once an alternative time zone is specified for a specific PC’s start variant, it is saved in table RSPCTRIGGER, field TMZONE.
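The saved value can be checked directly in the table, e.g. via SE16. As a hedged illustration only, the Python snippet below reads RSPCTRIGGER remotely using the open-source pyrfc connector and the standard RFC_READ_TABLE function module; the connection details are placeholders and the field name VARIANTE (the start variant) is an assumption, while TMZONE is the field mentioned above.

# Hedged sketch: list start variants with an alternative time zone set.
# Requires the pyrfc package and the SAP NW RFC SDK; connection details are placeholders.
from pyrfc import Connection

conn = Connection(ashost="bw-host.example.com", sysnr="00",
                  client="100", user="RFC_USER", passwd="secret")

result = conn.call(
    "RFC_READ_TABLE",
    QUERY_TABLE="RSPCTRIGGER",
    DELIMITER="|",
    FIELDS=[{"FIELDNAME": "VARIANTE"}, {"FIELDNAME": "TMZONE"}],  # field names assumed
    OPTIONS=[{"TEXT": "TMZONE <> ''"}],  # only variants with an alternative time zone
)

for row in result["DATA"]:
    variant, tmzone = (c.strip() for c in row["WA"].split("|"))
    print(f"Start variant {variant} uses alternative time zone {tmzone}")

conn.close()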

The same functionality is leveraged in SAP standard background jobs.



Monday, September 30, 2024

Different product lines of SAP BW

In some cases there is confusion about the versions of SAP BW introduced over the years (the first version appeared around 1998). Let me briefly sort this out. This blog post is not comprehensive, but it tries to put the naming convention of the major BW releases straight.

 

1. SAP Business Warehouse Classic (classic BW) aka SAP NetWeaver based Business Warehouse (component SAP_BW), runs on any DB, see details:

SAP Business Warehouse 3.5 part of SAP NetWeaver 04

SAP Business Warehouse 7.0 part of SAP NetWeaver 2004s (NW’04s) aka NetWeaver 7.0

SAP Business Warehouse 7.3

SAP Business Warehouse 7.4

SAP Business Warehouse 7.5

These versions of BW are sometimes referred to as SAP NetWeaver BW (all versions), aka BW 7.x.

 

2. SAP Business Warehouse powered by SAP HANA aka BW on HANA (component SAP_BW), runs on SAP HANA DB only, see details here

 

3. SAP BW/4HANA (component DW4CORE), see details here or here. BW/4HANA was based on BW 7.5 but redeveloped; many components were removed, and it is not based on the NetWeaver stack anymore.

 

If the term classic BW is used, what is meant is BW based on the SAP NetWeaver stack, i.e. all versions starting with 3.5 up to and including BW on HANA. The difference between 7.x and BW on HANA is that 7.x supports any database, whereas BW on HANA runs on the HANA DB only.

 

More information:

Short history of SAP BW

SAP BW/4HANA (B4H) versions


Monday, August 12, 2024

SAP S/4HANA Cloud Public vs Private Edition?

SAP S/4HANA Cloud is an enterprise resource planning (ERP) suite offered by SAP, and it comes in two primary deployment options: Public Edition and Private Edition. Each offers different features, levels of customization, and deployment flexibility to cater to various business needs. In general, below is a breakdown of the differences between the two:


SAP S/4HANA Cloud Public Edition is a better fit for organizations that want a standardized, effective, and quickly deployable ERP solution with minimal customization needs.

On the other hand, the Private Edition is better suited for organizations that require a highly customized ERP environment, need control over their system updates, and are willing to invest in a more flexible and powerful deployment model.

3-Tier Model to get to ABAP Cloud

Customers who want to migrate their SAP ERP systems to the cloud need to embrace the cloud from an ABAP perspective too. This shift is needed because on-premise SAP ERP systems rely on classic ABAP extensibility options (user/customer exits, BAdIs, enhancement points, modifications, appends, structure and menu exits, etc.). All of these were used to tailor SAP systems to specific business requirements. But since the introduction of the cloud, the classic ABAP extensibility options are no longer supported in cloud-based SAP ERP systems.

Apparently, the majority of SAP customers won’t start their move to the cloud with a new greenfield implementation of an ERP system such as S/4HANA Cloud. Therefore, SAP had to come up with something that enables the cloud transition for existing customers running their ERP systems on premise: the 3-Tier Extensibility Model. Its purpose is to enable the transition from classic ABAP to ABAP Cloud and to manage the coexistence of these different extensibility models.

Remember the much-used term "clean core"? It means an up-to-date, transparent, unmodified SAP system. All these adjectives describe a system that is cloud compliant. The reason this is important is that in the cloud all customers use the same base code line and changes are applied to all customers simultaneously. Therefore, there is no way to allow each individual customer to implement enhancements the way they could in their on-premise systems.

 

Tier 1 – Cloud development: the default choice for all new extensions and new custom applications following the SAP S/4HANA Cloud extensibility model. The goal is to get to this tier from the lower tiers.

 

Tier 2 – Cloud API enablement / API layer: if there are any objects (BAPIs, classes, function modules, Core Data Services) that are not yet released by SAP but are required in tier 1, a custom wrapper is created for them. This mitigates the missing local public APIs or extension points. The custom wrappers are built and released for cloud development. Once SAP releases a public local API, the custom one can be set to obsolete and removed. The ABAP Test Cockpit (ATC) can be leveraged here to enforce the ABAP Cloud guidelines; violations of the ABAP Cloud rules can be managed via ATC exemptions.

 

Tier 3 – Legacy development / classic ABAP development: classic extensibility based on classic ABAP custom code that is not supported in the ABAP Cloud development model, e.g. BAPIs, user exits, modifications, SAP GUI, file access, reports writing to the GUI, etc. The goal is to avoid developments in this tier and follow the ABAP Cloud development model. For customers at this stage, the classic objects are to be modernized and moved to tier 1. They need to be reworked one by one; there is no tool for that.

 

Now, when it comes to the actual (re)development of objects in the particular tiers, a concept of software components is used. By creating its own component, an object is separated from the others (e.g. from non-clean-core components – remember clean core). This separation is needed because the component applies stricter ABAP Cloud rules to its objects.

For all the details on how to work with objects within a specific tier, follow the official SAP guidelines below.

 

More information:

Clean Core

ABAP Cloud API Enablement Guidelines for SAP S/4HANA Cloud, private edition, and SAP S/4HANA - overview

ABAP Extensibility Guide - overview

ABAP Cloud - How to mitigate missing released SAP APIs in SAP S/4HANA Cloud, private edition and SAP S/4HANA – The new ABAP Cloud API enablement guide

SAP S/4HANA Extensibility: All You Need to Know

Wednesday, August 7, 2024

MERGE process in SAP BW

In SAP BW systems running on the SAP HANA database there is a process called merge. Data written into (BW) objects/tables is first saved in an uncompressed delta storage. The merge process transfers the uncompressed delta storage data to the performance-optimized main storage. To fully leverage the performance-optimized storage, the merge process must be started at regular intervals. If the data just stayed in the delta storage, memory consumption would increase (delta storage tables may become very large), which could lead to system instability.

The merge itself can be executed for several reasons. There is a term – merge motivation – which describes the mechanism by which a delta merge operation is triggered. Depending on how the merge is triggered, the SAP HANA DB distinguishes the following types of merge:

Delta merge - operation in the column store of the SAP HANA database that moves changes collected in the delta storage to the read-optimized main storage.

Smart merge – delta merge operation triggered by a request from an application. In SAP BW this is referred to as the DataStore smart merge.

Hard merge – delta merge operation triggered manually by SQL statement.

Forced merge – delta merge operation triggered manually by SQL statement that passes an optional parameter to execute the delta merge regardless of system resource availability.

Memory-only merge - delta merge operation triggered manually by SQL statement that passes an optional parameter to omit the final step of the delta merge, that is, persisting main storage to disk.

Auto merge – merge executed automatically by the database system.
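The manually triggered variants above are plain native SQL statements on the HANA DB (MERGE DELTA OF). As a sketch only – host, credentials, schema and table name are placeholders – the Python snippet below uses SAP's hdbcli driver to check how much data sits in the delta storage of a table and to fire a hard merge; the forced and memory-only variants differ only in the WITH PARAMETERS clause shown in the comments.

# Hedged sketch using SAP's hdbcli driver: inspect the delta storage of a table
# and trigger a manual (hard) delta merge. All connection details, the schema
# and the table name are placeholders.
from hdbcli import dbapi

SCHEMA = "SAPABAP1"        # placeholder ABAP schema
TABLE = "/BIC/AZSALES2"    # placeholder aDSO active data table

conn = dbapi.connect(address="hana-host.example.com", port=30215,
                     user="MONITORING_USER", password="secret")
cur = conn.cursor()

# How many records / how much memory currently sit in the delta storage?
cur.execute(
    "SELECT raw_record_count_in_delta, memory_size_in_delta "
    "FROM m_cs_tables WHERE schema_name = ? AND table_name = ?",
    (SCHEMA, TABLE))
print(cur.fetchone())

# Hard merge: manual merge that still respects system resource availability.
cur.execute(f'MERGE DELTA OF "{SCHEMA}"."{TABLE}"')

# Forced merge:      MERGE DELTA OF "<schema>"."<table>" WITH PARAMETERS ('FORCED_MERGE' = 'ON')
# Memory-only merge: MERGE DELTA OF "<schema>"."<table>" WITH PARAMETERS ('MEMORY_MERGE' = 'ON')

cur.close()
conn.close()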

 

In SAP BW the smart delta merge trigger can be set at DTP level: in the DTP maintenance t-code RSDTP, on the Update tab, there is a checkbox called 'Trigger Database Merge'. There is also a process type in process chains called 'Trigger the delta merge' that can be used to trigger the merge process.

A hard delta merge after request deletion in an aDSO can be driven by the RSADMIN table parameter 'RSDSO_HARD_MERGE_AFTER_REQDEL'. Setting it to X enables the hard delta merge after request deletion (RSADMIN parameters are typically maintained via report SAP_RSADMIN_MAINTAIN).

 

More information:

SAP HANA Delta Merge Related To SAP BW/4HANA

3481267 - Hard delta merge after request deletion

Tuesday, August 6, 2024

'accounts.sap.com' vs 'account.sap.com'

With the introduction of SAP Universal ID (UID) there are two different sites for managing SAP user accounts. One is https://accounts.sap.com and the other is https://account.sap.com. Kindly note the difference: the singular 'account' vs the plural 'accounts'.

The difference between the two is which account types they are supposed to manage. S-user/P-user accounts and Universal IDs are managed through different platforms.

S-user/P-user accounts are managed via the plural 'accounts' site – https://accounts.sap.com (Profile Management).


Whereas SAP Universal IDs are managed via the singular 'account' site – https://account.sap.com (Universal ID Account Manager).


Users can use the respective sites to manage passwords and account information too. For example, to reset a UID password, make sure 'account.sap.com' is used. On the other hand, to reset an S-user or P-user ID password (if that ID is not linked to a UID), use the 'accounts.sap.com' site.

 

More information:

SAP Universal ID (UID)

Something about SAP Service Marketplace (S-user or S*user) ID

Thursday, August 1, 2024

What are 1972 requests in BW?

In my earlier post about RSPM (Request Status Process Management) I mentioned special data load request numbers (so-called TSNs – Transaction Sequence Numbers). One of them is called the housekeeping request. The IDs of those requests start with 1972, so they are sometimes called 1972 requests, TSNs from year 1972, or 1972-replacement requests.

Request format:

{1972-XX-XX XX:XX:XX XXXXXX XXX}

Example of such a request can be:

{1972-01-01 01:00:00 001359 CET}      

I am not sure why the particular year 1972 was chosen. Perhaps it has something to do with the fact that SAP was founded in 1972? Probably not :-)

Example of a 1972 request as seen in the RSMNG t-code:

More information:

Request Status Process Management 

BW request types: RSSM vs RSPM


Tuesday, July 30, 2024

DTP: Track records after failed request

There is an option available for DTPs on the Update tab called “Track Records after Failed Request”. When it is checked, the BW system builds a cross-reference table when the upload request fails. The table traces the erroneous records of the data load.


This option can only be selected when error handling is set to “Terminate request; no record tracing; no updating”, i.e. when error handling is deactivated.

This option helps data load performance because error records are not tracked during the data load process.

Normally, when the option is checked for a particular DTP, a warning message is shown as below:

'Attribute 'Automatically Switch Record Tracking On: Value 'X' is obsolete'. RSBK453


This message is shown because certain checks are executed by the method _CHECK_GENERAL (of class CL_RSBK_DTP_V). This method calls CL_RSBK_DTP_API=>ADMISSIBLE_GET, which populates the value. If the checkbox is enabled, the value equals the attribute CL_RSBK_DTP_API=>C_S_ATTRIBUTE-TRACK_RECORDS.

Just to add: SAP recommends (e.g. here for BW/4HANA or for Datasphere) activating error handling only if errors occur during execution of the DTP, so not by default. If error handling is activated, the data records with errors are written to the data transfer intermediate storage, where they can be corrected before being written to the data target using an error DTP.


Saturday, July 27, 2024

DTP: No filter defined for mandatory selection field

An error message like the ones below can be displayed on an attempt to run or activate a DTP:

Filter for field xxx (InfoObject xxx) is mandatory; Input a selection

No filter defined for mandatory selection field xxx


Does it mean that there can be a mandatory field on DTP filter? Well, it depends...


1. If the BW is a classic BW system or a BW/4HANA 1.0 based system, and the source object of the DTP is a CompositeProvider (HCPR), the DTP filter routine is not checked for mandatory fields. This can be solved by implementing SAP Note 2438744 (see also Note 3375464).

 

2. If the source object is a calculation view based on an HCPR (and the BW system is based on NetWeaver or any BW/4HANA version), there are a couple of Notes to be implemented. Start with Note 2813510.

 

More information:

2438744 - DTP Filter Routine are not checked for mandatory fields in combination with CompositeProvider as source

3375464 - Error "Filter for field /BIC/FXXXXXXX (InfoObject XXXXXXX) is mandatory" occurs while activating the DTP.

2813510 - No filter defined for mandatory selection field

2760751 - DTP: "No filter defined for mandatory selection field XYZ" displays during the Check / Activation of the DTP which has the target as SPO.

Sunday, June 30, 2024

Possibilities of BPC package prompt's UI

There are a few options for how to set up the input given by a user via the pop-up windows of a DataMart script (DS).

The first consideration is whether the input fields in the prompt should support browsing BPC dimension hierarchies.

If hierarchy browsing is not required, the prompt can be set up as a simple text field, i.e. the PROMPT command is of type TEXT.

DS:

PROMPT(TEXT,%SRC_TIM%,"Enter Source TIME","%TIME%")

TASK(/CPMB/DEFAULT_FORMULAS_LOGIC,REPLACEPARAM,SRC_TIM%EQU%%SRC_TIM%)

 

If hierarchy browsing is needed, the PROMPT command is of type COPYMOVEINPUT:


DS:

PROMPT(COPYMOVEINPUT,%SRC_TIM%,%TGT_TIM%,"Select the TIME members from to","TIME")

TASK(/CPMB/DEFAULT_FORMULAS_LOGIC,MEMBERSELECTION,SRC_TIM%EQU%%SRC_TIM%%TAB%TGT_TIM%EQU%%TGT_TIM%)

 

A disadvantage of the above option that supports hierarchy browsing is that if several dimension values are to be entered, each is placed on a new pop-up window.

If the user prefers to have one pop-up window with all the values, the following option can be used. It is again the PROMPT command, this time of type SELECT:



DS:

PROMPT(SELECT,%SELECTION_IN%,,"Select data to copy","TIME,VERSION")

TASK(/CPMB/FX_RESTATMENT_LOGIC,REPLACEPARAM,SELECTION_IN%EQU%%SELECTION_IN%)

 

More information:

TEXT Prompt() Command

COPYMOVEINPUT Prompt() Command

SELECT Prompt() Command


Saturday, June 1, 2024

SAP Datasphere (and BW bridge) limitations

When comparing SAP Datasphere to the traditional SAP BW system, several technical limitations become evident. Here are some of them listed out:

 

1 Mature Feature Set / Functionality Gap

SAP BW has a more mature and extensive feature set due to its long history, including advanced data modeling, ETL capabilities, and built-in analytics, which SAP Datasphere may lack. As an example:

- OLAP Engine/BW Analytical Manager functionality not supported, e.g. no analysis authorizations, no query as InfoProvider, no query execution, no calculation of non-cumulative key figures (inventory management)

- No add-ons (e.g. SEM-BCS) supported

 

2 Data Modeling related to SAP HANA

- Generation of external SAP HANA Calculation Views not possible

- Not possible to use SAP HANA Calculation Views as part of a CompositeProvider

- No planning scenarios supported

- Temporal joins in a CompositeProvider not supported

- Many process types used in process chains are not supported (e.g. ABAP program execution process, archive, job event, BODS-related processes, etc.)

- Ambiguous joins not supported

- BAdI Provider as PartProvider not supported

- Open ODS View without calculation scenario not supported

 

3 Data Integration

- Connections to source systems are supported only via ODP technology and push scenarios.

 

4 Performance and Scalability

- Current SAP Datasphere instance sizes are limited to 4 TB, which may not be enough for organizations running bigger BW systems than that.

 

5 Reporting and Analytics

- No BW (BEx) query support

- No user exit BW (BEx) variables

- No unit conversion

- No constant selections in BW (BEx) reports

- No BW (BEx) query variables support in DTP filters

 

6 Application development

- Application development is not supported; applications are to be built using SAP BTP instead.

 

7 UI/UX

- No SAP GUI access

 

 

These technical limitations highlight the areas where SAP Datasphere is still catching up to the more established and mature SAP BW system. As SAP continues to develop Datasphere, many of these limitations may be addressed over time.

Tuesday, April 30, 2024

aDSO: Validity and Reference-Point tables

As mentioned in my post Storage data target type in BW Request Management, there are several tables that store data for aDSO objects. Besides the very well-known ones like the active data, inbound, and change log tables, there are also others.

In case the aDSO is of type Inventory:



the following two tables are also available (naming pattern shown below):

Validity Table: /BIC/A<technical name>4

Reference Point Table: /BIC/A<technical name>5

 

The purpose of an inventory-enabled aDSO is to manage non-cumulative key figures. A non-cumulative measure, in the context of data analysis or statistics, refers to a metric or variable that does not accumulate or aggregate over time or across categories. In other words, it represents a single point or snapshot value rather than a total or sum.

E.g. if there is a need to analyze sales data for a particular day, the number of units sold on that day would be a non-cumulative measure. It doesn't consider sales from previous days; it just reflects the sales for that specific day.

Non-cumulative measures are often used when you need to examine data at a specific point in time or within a specific category without considering historical or cumulative values. They are particularly useful for analyzing trends, patterns, or comparisons within discrete units of analysis.
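To make the reference-point idea more tangible, here is a tiny, BW-independent Python illustration (a toy model, not the actual BW algorithm): the latest known stock value acts as the reference point, and earlier snapshot values are reconstructed by rolling the inventory movements backwards from it – conceptually what the reference-point table enables for an inventory aDSO.

# Toy illustration of a non-cumulative key figure (stock quantity).
# Assumption for the sketch: the latest known stock is kept as a reference point
# and daily movements (receipts positive, issues negative) are stored as deltas.
reference_point = 120            # stock at the end of 2024-04-30 (latest day)
movements = {
    "2024-04-28": +50,
    "2024-04-29": -30,
    "2024-04-30": +20,
}

def stock_on(day: str) -> int:
    # Reconstruct the end-of-day snapshot by undoing all later movements.
    stock = reference_point
    for d, delta in movements.items():
        if d > day:
            stock -= delta
    return stock

for d in sorted(movements):
    print(d, stock_on(d))        # 2024-04-28 -> 130, 2024-04-29 -> 100, 2024-04-30 -> 120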

The tables are also available via t-code RSMNG in the Utilities menu: the Display Validity tab and the Reference-Point tab:



Friday, March 29, 2024

Activation of SNP Glue objects after transport

In my last blog post I wrote about SNP Glue transports. Once the Glue objects are moved to the target SAP system, there is one more step to execute before the objects are ready to be used: the so-called object activation step. It can be performed in a dedicated t-code (/DVD/GLTR - DataVard Glue Transport Workbench) via the Activate Request button in the toolbar.


Once you hit that button you get a pop-up window to provide a TR which objects are to be activated.


The next pop-up lists all the objects in that TR. One can choose whether a particular object is to be activated or not. There is also the possibility to delete Glue objects by selecting the Delete checkbox. One more checkbox is to be checked if the objects are new and are to be created/changed.


Once this pop-up is confirmed, the objects are physically activated; afterwards they are visible in the Glue Object Navigator (t-code /DVD/GL80) and are ready to be used.

Transports of SNP Glue objects

In my previous post about the SNP Glue integration tool I described how it can be leveraged to transfer data between Snowflake and SAP BW/4HANA systems. However, I left out how to transport the SNP Glue objects, so I cover it in this post.

There is a dedicated t-code (/DVD/GLTR - DataVard Glue Transport Workbench) to manage transports of the objects developed for Glue.


The Glue transport t-code allows collecting the objects for transport by package, user name, or object name. Once the objects are entered on the selection screen, the Execute button writes them to a regular SAP STMS transport request. Here’s what such a request looks like:



Entries in the TR are basically transparent table entries; these values are transported to the target SAP system. This is because the Glue transport contains metadata objects (object definitions), plus ABAP objects like classes and programs. Based on this metadata, the physical Glue objects are recreated in the target system upon object activation.

Below are some of the tables that carry the metadata per particular Glue object:

/DVD/GL_T_DD_TH – Glue table header objects

/DVD/GL_T_E2_MAP - Persistent Transformation/mapping

/DVD/GL_T_E2_OBJ - Extractor 2.0 object directory

/DVD/GL_T_E2_OBP - Extraction object's parameters

/DVD/GL_T_FLD_OB - Glue objects mapped to logical folder

/DVD/GL_T_FOLDER - Table for mapping folders inside packages

 

Once the objects are collected into the TR via t-code /DVD/GLTR, the TR can be released like any other SAP transport, e.g. in t-code SE10. At this point the TR is ready to be imported into the target SAP system.



Sunday, March 24, 2024

Simple Snowflake to SAP BW data flow via SNP Glue

I was recently involved in a data transformation project that leveraged the SNP Glue tool. SNP Glue (formerly Datavard Glue) is designed to integrate and connect various SAP and non-SAP systems, facilitating data exchange, synchronization, and consolidation.

In my scenario we used Glue to transfer data located in a Snowflake system in the cloud to an on-premise SAP BW/4HANA system. There are a couple of Glue objects that need to be developed to enable the information exchange between Snowflake and SAP BW/4HANA.

1. Glue Storage

2. Glue table

3. SAP table

4. Glue Fetcher

5. Glue Consumer

6. Glue Extraction Process

7. SAP BW Datasource

8. SAP BW target infoprovider – e.g. aDSO

 

All these objects are developed in the SAP BW system where the Glue tool is installed. The main component of Glue is the Cockpit (SNP Glue Cockpit), which can be accessed via t-code /DVD/GLUE (ABAP program /DVD/GL_MAIN); all other parts of Glue can be accessed from there. Glue can also be installed as an add-on; in this case there is a component called Glue in the particular SAP BW installation.



1. Glue Storage – a central object that encapsulates all information needed to connect to the remote object from/to which the data will be read/written. The Storage is an SNP object that is shared between their tools (e.g. SNP OutBoard). Below are the settings that need to be provided to the Storage object in order to connect to Snowflake. There is a dedicated t-code (/DVD/SM_SETUP) to maintain the storage. Two storages need to be defined for the connection to Snowflake:

1.1 Internal Glue Storage

Storage ID – logical name, defined by the Glue developer, usually must follow a naming convention given by particular developer guideline

Storage type – predefined as SNOW_STAGE, type binary

Description – free text, should be describing e.g. meaning of data that is being transformed

Java connector RFC – ID of the connection to the Java connector (JCo). The JCo is mandatory and must be installed in the SAP BW system in order to use Glue. Similarly to the Storage itself, the JCo connections are shared across different SNP tools. There is a special t-code (/DVD/JCO_MNG) to set up and control the JCo connections.

Account name - name of the Snowflake account, in the format: <account name>.<region>.<platform>

User role – haven’t used this

JAVA Call Repeat – 0 by default

Repeat delay (seconds) - 0 by default

Driver path – path to the JDBC Snowflake driver, located on the SAP server

Connection pool size – 0 by default

Username – user at Snowflake side

Password – of the user above

 

1.2 Glue Storage

Storage ID - logical name, see above in internal storage. Notice that the ID is different from the ID used in internal storage.

Storage type - predefined as SNOWFLAKE, type TAB, for transparent storage

Description – see in internal storage

Referenced storage – id of internal storage, main storage uses internal one

Java connector RFC - see in internal storage

JDBC Call Repeat – 0 by default

JDBC Repeat delay (seconds) – 0 by default

Account name - see in internal storage

Warehouse – an existing warehouse in the Snowflake account; the WH that will be used to perform computing operations like SQL in Snowflake

Database name – name of the database in the Snowflake

Database schema – name of the database schema in the Snowflake

User Role – user role in the Snowflake

Driver path – Snowflake driver path on the SAP server

Hints - string that is added to connection string when JDBC driver establishes the connection

Connection pool size – 0 by default, number of connections that can be kept open in the pool

File Type – CSV or Parquet, I used CSV type

Table name prefix - prefix of all Glue tables created within this storage

Use Snowflake App for data merge – haven’t used it; if enabled, the SNP Native App is used

Wrap values in staged CSV files – haven’t used it

Data delivery guarantee – EO (Exactly-once), Data transfer behavior

 

All other Glue objects listed below are maintained in the Object Navigator part of the SNP Glue Cockpit. The Object Navigator can also be accessed via a dedicated t-code - /DVD/GL80.


2. Glue table – can also be maintained in the dedicated t-code /DVD/GL11. The Glue table is a metadata object that represents the remote object (a Snowflake view in my case). It allows working with the data from that remote object in the SAP landscape. It contains a list of the columns of the table or view in the remote DB – in my case, the columns of my Snowflake DB’s table. The Glue table must be activated before it can be used in other Glue objects. This table doesn’t store data persistently on the SAP side; the data is only read from the remote DB when the extraction process runs.

 

3. SAP table – an SAP DDIC table that stores the data physically once it is fetched from the remote DB. The data is stored persistently here.

 

The objects below (Fetcher, Consumer, and Extraction Process) are part of the Glue Extractor 2.0.

4. Glue Fetcher – allows data transfer from source to target objects. It refers to the Glue table object. It defines whether particular columns in the remote table/view are used for selection. Further, it defines which delta mechanism type is used (FULL, DATA, TIMESTAMP, VALUE, VALUE_DIST). In my case I used FULL extraction with a cursor.


5. Glue Consumer – comes into the picture when the data is written into the target. In my case it specifies the SAP Table I created in step no 3.


6. Glue Extraction Process – the object responsible for running the whole process, from reading the data from the source (by the Fetcher) to writing the data (by the Consumer). The Extraction Process can also be used to manage the execution of extractions and to monitor launched extractions. It has capabilities similar to running a DTP or a process chain in SAP BW.

Here the Fetcher and Consumer need to be specified at design time. The whole process is driven by a generated Z* report; you can find its name in the Generated report name field. It is also possible to define a data transformation by specifying rules and start/end routines. In my case I used neither routines nor any transformation, just pure 1:1 mapping.


7. SAP BW Datasource – in my scenario I created a custom DS to be able to store data into the BW aDSO object. The DS was based on the SAP table object I created in step no 3.


8. SAP BW target InfoProvider – e.g. the aDSO that is the final storage of the Snowflake data in SAP BW. This is the object I used further for reporting – via a CompositeProvider.

 

To run the data replication from Snowflake to SAP BW/4HANA I used a process chain. The first step in the chain ran the Glue extraction process, which got the Snowflake data into the SAP table. From there I used a DTP to load the data into my aDSO.

 

More information:

SNP Glue product page

SNP Glue documentation