Monday, March 27, 2023

Package based aDSO activation

In SAP BW, the aDSO (advanced DataStore Object) or DSO (DataStore Object) serves as a persistence layer for storing and processing data. In the case of e.g. a standard DSO, data activation is the process of moving data from the inbound table of the DSO to its active table; data is aggregated accordingly within this process. See Flavors of aDSO object for more details.

When there is a huge number (10k+) of data load requests in a specific aDSO object and they are all activated together, problems can occur, mostly with respect to memory.

This is solved in the newer BW releases (7.50, BW/4HANA 1.0, BW/4HANA 2.0, BW/4HANA 2021) by introducing so-called package based activation. This type of data activation is available when the activation runs via a process chain, using the process variant "Clean Up Old Requests in DataStore Objects (Advanced)". In this process, a package size can be defined per aDSO object by the user. If it is not defined, a default package size of 10k is used.

The value itself is stored in table RSPCVARIANT (Generic Variant Storage):
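As a quick check, the configured package size can be read straight from that table with a short ABAP snippet. This is only a sketch: the variant name used here is hypothetical, and the RSPCVARIANT field names (VARIANTE, FNAM, LOW) are assumptions that should be verified in SE11 on your release.

```abap
* Sketch: read the activation package size stored for a process chain
* variant from RSPCVARIANT. The field names (VARIANTE, FNAM, LOW) are
* assumptions - verify them in SE11.
DATA lt_variant TYPE STANDARD TABLE OF rspcvariant.

SELECT * FROM rspcvariant
  INTO TABLE lt_variant
  WHERE variante = 'ZMY_CLEANUP_VARIANT'.  " hypothetical variant name

LOOP AT lt_variant INTO DATA(ls_variant).
  WRITE: / ls_variant-fnam, ls_variant-low.
ENDLOOP.
```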

The new package based data activation only works when it is triggered from process chains via the above mentioned process variant. If the activation is performed manually, either via t-code RSMNG or via the BW/4 Cockpit, the activation works in the old way – no package size is used.



More information:

3108217 - ADSO: packaged activation of requests

3038366 - Activation of requests in an aDSO with a huge number of request fails

Thursday, March 23, 2023

Watermarks of a BW data target object

Watermarks in BW are a set of internal counters holding technical information about a BW data target. The following BW data target objects may have watermarks: InfoCube, DataStore Object, Master Data Table and Text Table.

There is an ABAP program (RSPM_ADSO_WATERMARKS) in SAP BW systems that displays the aDSO watermarks, such as:

·        Number of all requests in aDSO

·        Number of nonactive requests in aDSO

·        Number of deleted requests in aDSO

·        TSN of lowest active request (AQ) in aDSO

·        TSN of highest active request (AQ) in aDSO

·        AQ Status

·        TSN of lowest active request (AT) in aDSO

·        TSN of highest active request (AT) in aDSO

·        AT Status

·        Count

·        Count AT

·        Data in aDSO must be activated

·        Proc. Type

·        Process ID

·        Datamarted source TSN

·        Requests not datamarted

·        Delta information


The program gathers information from many tables, such as RSOADSO, RSPMREQUEST, RSMDATASTATE_DMO (DMO - DataMart Out), RSMDATASTATE_DMI (DMI - DataMart In) and others. For non BW/4-optimized objects like the classic InfoCube, there is a function module RSSM_SHOW_WATERMARKS that gathers the watermarks for those objects.

Selection screen of the report RSPM_ADSO_WATERMARKS:

Example output screen of the same report:

Monday, March 20, 2023

Reassigning of Process Chain’s InfoArea

When I needed to reassign a process chain (PC) between InfoAreas (IA), I always struggled. The only way I knew how to do it was a drag-and-drop in backend t-codes like RSA1, RSPC1, etc. In this way, I moved my PC within the IA tree from the current IA node to the new one. If the distance between the old and new node was quite big, I ended up moving the mouse until I got to the new IA, and sometimes it took a couple of minutes to get there.

However, with the introduction of the BW Cockpit, the PC's IA reassignment gets a lot easier. There is functionality built into the BW Cockpit to do it very simply: in the Process Chain Editor part of the BW Cockpit, there is a Properties button in the top right part of the page.

The button opens a dialog window where, among other attributes of the PC, the InfoArea can easily be changed to a new one. The information about the assigned IA is saved in table RSPCCHAINATTR, column APPLNM.

More information:

How to find out InfoArea for particular BW object type – in BW/4 systems?

BW’s InfoArea vs Application components

BW/4HANA Cockpit

Other posts on Process Chains topic

Monday, March 13, 2023

SAP Datasphere is the new SAP DataWarehouse Cloud

This week, on March 8th 2023 during the SAP Data Unleashed event, SAP announced a new solution called SAP Datasphere. What it means is that Data Warehouse Cloud (DWC) becomes SAP Datasphere.


What led SAP to announce this solution? Basically, they are trying to address today’s challenges of data architecture: from data warehousing (structured data) to data lakes (unstructured or any kind of data) and beyond to data fabric (an integrated layer (fabric) of data and connecting processes), reaching to particular challenges like data federation, cataloging, lineage, metadata, integration and semantic modeling of data.

How does SAP address these kinds of challenges? By combining a portfolio of their existing products:

DWC – Data warehouse solution in the cloud that evolved from SAP BW (BW/4); customers can move their BW models to Datasphere/DWC via SAP BW Bridge (BWB), so investments made into BW are safe.

SAP Analytics Cloud (SAC) – Datasphere is integrated into SAC by supporting its analytics and planning use cases.

SAP Data Intelligence Cloud (formerly SAP Data Hub) – Datasphere leverages its Data Catalog functionality and engines for data movement.


And in addition, solutions from their partners like below to support the Business Data Fabric:

Databricks – provides a data lakehouse platform (data warehouse + data lake), initially based around Apache Spark.

Confluent – data-in-motion (streaming) capabilities based on Apache Kafka.

Collibra – data governance and metadata management capabilities.

DataRobot – capabilities of AI lifecycle management, a platform for augmented intelligence – AutoML.


All these capabilities together form the Datasphere, although technically speaking it is a combination of DWC and SAP Data Intelligence Cloud. The Data Warehouse Cloud is rebranded to Datasphere, with the claim that Datasphere is the next generation of DWC. Simply speaking, features for data integration, data cataloging, and semantic modeling were added to DWC to enhance its data discovery, modeling, and distribution capabilities, making it the Datasphere.

Data can be either replicated into the Datasphere or federated. SAP emphasizes an approach where data can sit anywhere and only its analytics runs in the Datasphere. This is a crucial point, as running the data warehouse in the cloud may not scale easily. Thus, keeping data in its source and not replicating it may sound like the better option.

A user of the Datasphere digs into the so-called Datasphere Catalog to find data of his/her interest. Leveraging its lineage capability, the relationships between different data can be explored. The Catalog supports data objects from other SAP Datasphere instances and SAP Analytics Cloud. This should be expanded soon to other SAP apps (like BW, ECC, S/4) plus non-SAP apps via its partners. While accessing data like this, data from one source can be enhanced/mixed with data from other sources directly by the user in Datasphere Spaces. I assume here that the Spaces are the next generation of BW workspaces.


My take

Nowadays organizations are processing data outside their enterprise systems more and more. Therefore, a solution that enables users to work in the analytics area using data from “anywhere” is very plausible. This is not new for SAP; a somewhat similar picture was painted when Data Intelligence/Data Hub came on board. Seeing the Datasphere as a successor to those initiatives (DI, BW, DWC), plus having its AI powered capabilities, the idea of the business data fabric may perhaps come true in future, with a kind of “chatGPT” style of analytics.


More information:

SAP Datasphere microsite


SAP Data Unleashed event

SAP Data Warehouse Cloud (DWC), SAP BW Bridge (BWB)

Wednesday, March 8, 2023

What are types of ABAP development packages?

A development package serves to structure ABAP development objects into logical units. In addition to organizing ABAP objects – development packages can themselves be organized into other packages – it enables SAP software logistics to function.

Organizing packages within other packages is enabled by the type of the development package. There are the following three types (as per domain MAINPACK):

'S' - Structure packages

'X' - Main packages

'' - Development packages


The purpose of structure packages is to act as top level containers in the package hierarchy that define the architecture of the objects within the subordinate packages. This type of package does not contain any development objects itself. Instead, it can contain package interfaces and subpackages.

The purpose of main packages is to group semantically similar objects. Like the structure packages, the main packages do not contain any development objects themselves either.

A development package groups the development objects themselves by their types (e.g. domains, data elements, programs, includes, classes, etc.).

Main packages, apart from their own package interfaces and subpackages, cannot contain other repository objects. Their subpackages, in turn, can be other main packages or standard development packages.

Packages are created in t-code SE21 (Package Builder). All the packages are stored in table TDEVC.
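For a quick overview of the custom packages in a system, TDEVC can be queried directly. A minimal sketch (DEVCLASS is the package name column in the standard TDEVC table):

```abap
* Sketch: list customer development packages (Z*/Y*) from TDEVC.
DATA lt_packages TYPE STANDARD TABLE OF tdevc.

SELECT * FROM tdevc
  INTO TABLE lt_packages
  WHERE devclass LIKE 'Z%'
     OR devclass LIKE 'Y%'.

LOOP AT lt_packages INTO DATA(ls_package).
  WRITE: / ls_package-devclass.
ENDLOOP.
```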


What development packages can be seen in the SAP NetWeaver/ABAP Platform based systems?

$TMP – Local or temporary objects, the most visible one. All objects that do not need to be transported into other systems go in here.

Z* or Y* – custom development packages. All objects created by the customer are stored in these packages.

$HV – Generated Help View Program

$MC – Generated Matchcode Programs/Functions

$ENQ – Generated ENQUEUE Function Modules

$GEN – Other generated objects

$SWF_RUN_CNT – Workflow Container: Generated Data Types

$EQ_GEN – Stores BW Easy Query generated objects. The package is created automatically upon the first Easy Query generation. It includes all generated BW Easy Query objects, such as the function modules.


More information:

ABAP Packages

SAP glossary

Friday, March 3, 2023

BW Background Job - Cleanup of orphaned DTIS entries

A DTP may run in DTIS (Data Transfer Intermediate Storage) mode. In this mode, extracted data is stored in the intermediate storage before it is updated to the target.

The intermediate storage may contain many records even though the corresponding DTP requests are already deleted or are in green status without any error. Thus the intermediate storage can grow fast, which can become a problem when many DTPs in the BW system run in that mode.

There is an ABAP program RSDTIS_CLEANUP that can run as a job to clean up the Intermediate Storage of DTIS DTPs that belong to deleted requests.

The report can run for a specific DTP or for multiple DTPs that are identified either by source object (aDSO, DataSource, Query Element, InfoSource, Composite Provider or InfoObject) or by Unique DTIS Number. The Unique DTIS Number (RSDTIS_NUMBER) is a sequential number that belongs to a specific execution of a DTP in the DTIS mode. All the Unique DTIS Numbers are stored in table RSDTIS.

Moreover, there is a checkbox that enables a deletion of unused DTIS tables.

More information:

2708942 - Cleanup of orphaned DTIS entries

SAP BW DTIS - Data Transfer Intermediate Storage

Wednesday, March 1, 2023

Collections of BW Background Jobs

SAP BW (Business Warehouse) background jobs are automated tasks that are scheduled to run at regular intervals without requiring manual intervention. These jobs are used for various purposes in the BW system, including data loading, data transformation, and system maintenance. Regular SAP BW background jobs are essential for ensuring the efficient operation of the BW system. They help to automate repetitive tasks, reduce the risk of human error, and improve system performance. They can also help to ensure that critical data is available to end-users when needed.

Here are some of the most common types of regular SAP BW background jobs:

Data loading jobs: Used to load data from source systems into the BW system. They may include jobs that extract data from external systems, jobs that transform data into the correct format for BW, and jobs that load data into BW data targets.

Data transformation jobs: Used to transform data from one format to another. They may include jobs that perform data cleansing, data validation, or data enrichment.

System maintenance jobs: Used to maintain the health and performance of the BW system. They may include jobs that perform database maintenance, system backups, or system health checks.

Archiving jobs: Used to archive data that is no longer needed in the active BW system. Archiving helps to reduce the size of the active BW system, which can improve system performance.

Broadcasting jobs: Used to generate reports based on data stored in the BW system.

Here’s a list of my blog posts related to regular background jobs in BW:

1. Cleanup of orphaned DTIS entries

2. Deletion of Orphaned Entries in Errorstack/Log Job RSBKCLEANUPBUFFER



5. Deletion of BW Messages/Parameters

SAP Profitability and Performance Management

It is an SAP solution designed to analyze the financial performance and profitability of an organization. It provides advanced analytical tools and reporting capabilities to help businesses identify the factors that drive their profits and optimize their performance. It is also known as SAP Performance Management for Financial Services, SAP Profitability and Performance Management Cloud, or SAP PaPM (the abbreviation of Profitability and Performance Management). Sometimes it is also referred to as NXI (the software component), as it is developed by a partner company called NEXONTIS (part of msg global solutions ag).


More information:

PaPM microsite

Help site

BW Background Job - Deletion of BW Messages/Parameters

BW background management provides basic functions for managing background and parallel processes in the BW system. The collection of background management functions plus logs and tools is what BW data processing is all about. Simply speaking, it comprises all the tasks that run as background jobs performing BW work like data loading, parallelization, indexing, filling aggregates, archiving, attribute change runs, broadcasting, compression, data activation, rollups, remodeling, collecting statistics, logs, messages, etc. All the BW background management jobs are visible in t-code RSBATCH.

All these tasks running all the time in the BW systems produce a great amount of logs and messages. The table used to store those logs and messages is RSBATCHDATA. To prevent the table from growing too large, there is a job that can do the housekeeping. The job is usually called BI_DELETE_OLD_MSG_PARM_DTPTEMP and it runs the ABAP program RSBATCH_DEL_MSG_PARM_DTPTEMP. It deletes old messages, parameters, temporary DTP data etc. On its selection screen, it is possible to specify the number of days as a retention period (delete all records older than X days, where X is from zero to 999) for messages, logs and DTP temporary data. It usually runs on a daily basis.

The job can be scheduled from t-code RSBATCH -> under Settings and Parallel Processing part -> Deletion selection.

More information:

Tuesday, February 28, 2023

BW Background Job - Deletion of Orphaned Entries in Errorstack/Log Job RSBKCLEANUPBUFFER

Another background job running in SAP BW systems does the housekeeping of orphaned entries in the DTP’s error stack and execution logs. It is the background job called RSBKCLEANUPBUFFER.

It deletes any existing temporary buffers and error stack data of deleted requests for which this data still exists. The report behind this job is also called RSBKCLEANUPBUFFER. On its selection screen there are date from/to fields and a check/delete-only option that define the time frame for the deletion itself.

More information:

1759601 - P30:DTP:Request temporary storage is deleted too soon

Activating the BW Software Component

As per the different operating modes of BW, the SAP_BW component needs to be activated in NetWeaver based systems to indicate that BW is configured/used. There is a dedicated database table that indicates which software components of the NetWeaver are used. The table is called CVERS_ACT (Active Software Components in the System) and the indicator is the ACTIVE column. If the ACTIVE column is set to X, the particular component is used within the system.

There are e.g. function modules DELIVERY_SET_ACTIVE_COMPONENT or RS_SET_ACTIVE_COMPONENT_FLAG that set the ACTIVE column for a specific software component.
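The flag can also be checked directly, for example with a small ABAP sketch (the column names COMPONENT and ACTIVE are assumptions based on the description above):

```abap
* Sketch: check whether the SAP_BW component is flagged as active
* in CVERS_ACT (column names COMPONENT/ACTIVE assumed as above).
DATA lv_active TYPE c LENGTH 1.

SELECT SINGLE active FROM cvers_act
  INTO lv_active
  WHERE component = 'SAP_BW'.

IF lv_active = 'X'.
  WRITE / 'SAP_BW is active in this system.'.
ELSE.
  WRITE / 'SAP_BW is not active.'.
ENDIF.
```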

More information:

Operating modes of SAP BW

Operating modes of SAP BW

In one of my previous blog posts regarding BW client strategy I briefly mentioned the types of BW system. The type of BW system, or let’s say the operating mode of the BW, defines the scenario which the system is executing and for which it was set up. The SAP BW system can operate in multiple modes.

Most common is the Enterprise DataWarehouse mode. It is a full-blown BW system running on a dedicated server, separated from other SAP systems, and it has its own client. The system is capable of loading data at large scale; the data is consumed by different front-end clients. The component SAP_BW is active in table CVERS_ACT. In the system, there are repositories of different BW objects like data sources, queries, etc. (see TLOGO objects).

Embedded BW mode. The SAP BW functions are automatically available in SAP ERP systems since SAP NetWeaver 7.0. The BW runs on the same server as the ERP, thus it is called ‘Embedded BW’. See more details here. Data is usually loaded into a separate BW client, other than the one for the ERP part of the system. The component SAP_BW is active in table CVERS_ACT.

Embedded Analytics mode. Only the analytics part of BW is active within the S/4HANA system. The analytical engine of the BW uses CDS views to perform OLAP reporting directly on the user data. The component SAP_BW is active in table CVERS_ACT. A separate client is technically not needed but it is advised to have one. There is no DataSource repository in such a system.


More information:

BW client strategy

What is TLOGO in SAP BW terms?

Embedded BW 1

Embedded BW 2

Monday, February 27, 2023

BW Background Job - DTP Clean Up Job RSBKCHECKBUFFER

Among the many background jobs running in an SAP BW system is the RSBKCHECKBUFFER job. This job runs the ABAP program RSBKCHECKBUFFER, which cleans up the DTP Runtime Buffer. The DTP Runtime Buffer is a temporary storage of the Data Transfer Process. Basically, there are settings for the DTP (like 'Fill Temporary Storage After Following Substep' -> see details here). If those settings are maintained, they represent a retention time for how long the temporary storage should be kept in the BW system.

Now a function is needed in the BW system that makes sure those retention periods are honored. That function is the RSBKCHECKBUFFER job. The job has to be planned periodically. When it runs, it deletes the entries once the retention time is reached. Similarly, the job also handles the DTPs in case the settings of the active version of the DTP do not contain any retention time. Usually it is enough to schedule the clean-up job to run every day.


More information:

Online docu


Thursday, February 23, 2023

Client strategy for SAP BW

When an SAP BW system is being installed, a decision needs to be made on a client strategy for the system itself. How do you decide which client is to be used as the BW client? One important thing to mention is that SAP BW handles only one client in a particular system (see You can only work in client 001). This is contrary to SAP ERP systems, where such a system can have multiple clients (see How to get rid of clients 001 and 066?). Sometimes the client strategy is referred to by the term Client Distribution. Technically, the BW client is stored in table RSADMINA, column BWMANDT. There are the following function modules to work with the BW’s client ID:

·        RS_MANDT_UNIQUE_GET Determine BW Client

·        RS_MANDT_UNIQUE_SET Define BW Client Permanently
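Reading the BW client in ABAP could look like the sketch below; note that the exporting parameter name E_MANDT is an assumption and should be checked in SE37:

```abap
* Sketch: determine the BW client via RS_MANDT_UNIQUE_GET.
* The exporting parameter name E_MANDT is an assumption - check SE37.
DATA lv_bw_client TYPE sy-mandt.

CALL FUNCTION 'RS_MANDT_UNIQUE_GET'
  IMPORTING
    e_mandt = lv_bw_client.

WRITE: / 'BW client:', lv_bw_client.
```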

As a rule of thumb, it is recommended to have a dedicated BW client in cases like:

·  If data is physically replicated from an SAP application to the Embedded BW, where both the other SAP app and the Embedded BW run on the same system

· Organizational reasons (data separation due to different companies, e.g. holding structure, etc.)

There are several aspects to be considered. One of the major ones is what type of BW system it is. There can be the following cases:

1 Standalone BW – a full-blown BW system running on a dedicated server; it has its own client as it runs on its own server separated from other SAP systems.

2 SAP S/4HANA embedded analytics using ABAP-based CDS and the Analytical Engine (automatically generated transient BEx providers). In case an organization is using this scenario heavily for many use cases, it is better to create a dedicated client for analytics.

3 SAP S/4HANA BPC Optimized using SAP BPC on PAK models. No separate BW client is needed unless (similar to the case above) there will be use cases combining external data with the S/4HANA data.

4 Embedded BW – BW included in SAP ERP systems since SAP NetWeaver 7.0, see more info here.

4.1 Embedded BW usage for creating small customer-specific data models for reporting and planning (incl. BPC on PAK), including external data sources to stage data into the Embedded BW (other SAP system, e.g. other SAP ERP/CRM/etc., stand-alone BW or non-SAP legacy system) – no usage as an Enterprise Data Warehouse.

4.2 Embedded BW usage to stage data from the same SAP S/4HANA system – a separate client for the Embedded BW is recommended. E.g. there are multiple S/4HANA clients (dev, testing, training, etc.) along with one client dedicated to the Embedded BW.

More information:

Embedded SAP BW Definition and Positioning

You can only work in client 001

Embedded BW 1

Embedded BW 2

What is embedded BW (part 2)?

I dedicated a blog post to the topic of Embedded BW in the past already. However, the topic is quite broad, so I decided to write a follow-up.

First, to sum up the purpose of the SAP Embedded BW (Business Warehouse): it is a data warehousing solution that is integrated within the SAP Enterprise Resource Planning (ERP) system. It allows organizations to collect, consolidate, and analyze data from various sources within the SAP environment, as well as external sources.

As the BW is “embedded” into NetWeaver based systems (such as ERP (SAP Business Suite, SAP S/4HANA), SAP CRM, etc.), it provides a real-time view of an organization's data, enabling decision-makers to make informed business decisions. It provides advanced reporting and analytics capabilities, including ad-hoc reporting, OLAP analysis, and predictive analytics. These capabilities enable users to gain insights into business performance and identify trends and patterns in the data.

One of the key benefits of Embedded BW is that it eliminates the need for separate data warehousing infrastructure and reduces the cost and complexity of managing multiple systems. It is also designed to be scalable, allowing organizations to expand their data warehousing capabilities as their data volumes grow.

However, the Embedded BW only supports processes related to financial planning, consolidation and data analytics; it is not intended to be used as a data warehouse solution. For pure enterprise data warehousing (EDW) purposes, the standalone SAP BW should be used.

The current version of BW used in the Embedded BW is SAP BW 7.5 powered by SAP HANA. As per SAP (see the Embedded SAP BW Definition and Positioning document), there are currently no plans to enable SAP BW/4HANA for Embedded BW.


More information:

Embedded BW part 1

2935516 - Embedded BW is not intended to be used for data warehousing

2773652 - Recommendations embedded SAP Business Warehouse

1972819 - Setup SAP BPC optimized for S/4 HANA Finance and Embedded BW Reporting (aka Integrated Business Planning for Finance)

Tuesday, February 14, 2023

BW4 – Parameters of RSADMINA table

I described the parameters of table RSADMINA (Control Table That Customer Can Change (Tenant-Specific)) in my earlier post here. However, since the introduction of BW/4 systems, there are more parameters in the mentioned table.


TLOGO_EXEC_DISAB       TLOGO Types: Automatic Execution Deactivated


BW4HANA_SWITCH         SAP BW, Edition for SAP HANA Switch; provides information about the BW4 operating mode.

Possible values:

           unknown (only internal)

0        Add-on not installed

1        Compatibility mode

2        B4H mode


TECH_SCOPE                   Operating Scope, so called Data Warehouse mode: Data Warehouse, Embedded BW, and Embedded Analytics

Possible values:


1        Lean Data Warehouse (ANALYTICS_ONLY)

4        BPC Planning only

16      Data Warehouse (DATA_WAREHOUSE)


TECH_COMPONENTS        Data element for tech. content component


HDI_MODE                       HCPR, IOBJ: Consumption Mode for HANA Native Objects. It checks whether the HANA DB schema name is equal to ‘_SYS_DI’.

Possible values

0        Repository Version 1 (No HDI Support)

1        Hybrid Support (Repo1 and HDI in parallel)

2        Pure HDI Support (Repo 1 is not supported)


ODATA_AUTO_REL           Release OData Service after Generation; whether BW queries are released automatically for OData consumption.

Comparison of the RSADMINA table in BW/4 and classic BW systems:

More information:

Parameters of RSADMINA table

Thursday, February 9, 2023

DTP ID prefixes

Some time ago I blogged about InfoPackage / DTP prefixes and BW data request prefixes. That blog post covered most of the InfoPackage and DTP ID prefixes; however, I found there are more of them.


As there are different types of Data Transfer Processes, there are similarly several DTP ID prefixes that correspond to those DTP types.

1 DTP_* - Regular DTP ID prefix, the so-called 'Standard DTP'. This is the case for most of the DTPs in any given BW system. The part identified by the asterisk is generated as a random string. Sometimes the standard DTPs are referred to as DTPA DTPs.

2 DTPV* - The so-called 'Virtual DTP' is created during migration to BW/4 systems by the ABAP class CL_RSBK_DTP_COLLECTION.

3 DTPT_* - The so-called 'Transfer DTP' is generated during migration to a BW/4 system.

4 DTPI_* - The so-called 'Transfer InfoPackage DTP' is generated by replacing the first 5 characters of the InfoPackage ID with the prefix 'DTPI_' during migration to BW/4. During the import of InfoPackages (in the case of a remote conversion / transfer), the settings of InfoPackages for file source systems are not replicated correctly in the DTPs replacing them; those DTP IDs start with this prefix.

5 RSBC - The so-called 'Command Package'.


In addition to the different DTP ID prefixes, there are the following DTP request ID prefixes:

1 DTPR_* - request ID as of Classic BW (pre BW/4 systems)

2 DTPR__* - new request ID as of BW/4 (or BW 7.5) systems, the so-called Transaction Sequence Number (TSN)

3 DTPR_SIMULATION – in case the DTP ran in debugging (simulation) mode


All these types are stored in the data dictionary in type group RSBC (Constants for the Data Transfer Process).


More information:

A little about InfoPackage / DTP prefixes and BW data request's prefixes

BW request types: RSSM vs RSPM

My blogs about DTPs

My blogs about InfoPackages

Tuesday, January 10, 2023

SAP retrofit transports

In general, the term retrofitting refers to the addition of new technology or features to older systems. In SAP terminology, retrofit is a process of dual landscape synchronization.

Retrofit comes into the picture when a dual landscape is deployed. That can be, for example, a regular DEV1->QAS1->PRD1 system landscape used to maintain existing productively used SAP applications (maintenance landscape). In parallel to that, there is another landscape used to develop a new SAP app (development landscape).

As the second landscape is not productively used, there are only 2 systems in it: DEV2->QAS2. Generally speaking, developments are performed in the development landscape and corrections/maintenance activities in the maintenance landscape at the same time.

What is beneficial is to keep both landscapes in sync from the point of view of the objects that they share. This synchronizing of changes between the two landscapes is called retrofitting. Retrofit helps to synchronize changed objects (customizing and workbench) from the maintenance development system to the project or upgrade development system in dual track landscapes (dual landscapes are also known as N+1 landscapes).

SAP Solution Manager has certain features to support the automation of retrofitting. There can be an automatic import of corrections done by SOLMAN. See details in the help pages.

Now to the dual BW development landscape. Large BW deployments can have several landscapes too. Thus, BW specific objects need to be moved across all the landscapes. SOLMAN also supports the following BW objects for retrofitting: File data sources (ISFS), Transfer rules (ISMP), Transfer structures (ISTS), DataSource (RSDS), Transformation (TRFN), Routines (ROUT) and BW formulas (RSFO).

There is one more interesting piece of information about the retrofit concerning one specific BW object type – the BW Transformation. There is a special OBJVERS type called R (probably standing for Retrofit) for the Transformation. In case a particular transformation is deleted, the R version of it is kept in the BW system. The retrofit entry is always generated in the system; its purpose is to be used for the retrofit transport. Anyhow, if for any reason one would like to get rid of it, there is an SAP standard program provided to delete it: RSTRAN_TRFN_DELETE_R_VERSION.

More information:

Solman - Retrofit for BW