Thursday, January 29, 2015

Multitenancy in HANA

Motivation: I have heard a lot about multitenancy recently. In this blog I want to sort out what multitenancy basically is, especially with regard to HANA. I'm eager to hear from you on the topic, and even to be corrected!

Release SPS09 of SAP HANA 1.0 came into the spotlight in October of last year. One of the main advertised features of this release is multitenancy. The term tenant is very important in today's very hot cloud computing. In software architecture terminology, multitenancy means that a single instance of software runs on a server and serves multiple tenants. A tenant is a group of users sharing the same view of the software they use. Software designed according to the multitenant architecture provides every tenant a dedicated share of the instance: its data, configuration, user management and tenant-specific functionality.

To put it simply, imagine multiple customers running the same app on the same server. The infrastructure is shared among them, and there is a software layer that keeps their data separated even though the data is stored in the same database.

It can be said even more simply. To compare it with the classic NetWeaver ABAP stack: the tenants are clients in an SAP system. All the data is separated by client, but all system configuration and metadata (e.g. the ABAP Data Dictionary) is shared. An ABAP system is a multitenant system. By default, all SQL statements such as SELECTs read only the data of the particular client in which they run.
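A conceptual sketch (in Python, not ABAP) of how such implicit client filtering works. The table, field names and data are invented for illustration; in NetWeaver the client key is the MANDT field:

```python
# Conceptual sketch: every row carries a client key (MANDT in NetWeaver
# tables), and every query implicitly filters on the client of the
# current session -- table contents here are purely illustrative.

ORDERS = [  # one physical table shared by all clients
    {"mandt": "100", "order_id": "4711", "amount": 250.0},
    {"mandt": "100", "order_id": "4712", "amount": 90.0},
    {"mandt": "200", "order_id": "9001", "amount": 410.0},
]

def select_orders(session_client):
    """Mimics an ABAP Open SQL SELECT: rows of other clients are invisible."""
    return [row for row in ORDERS if row["mandt"] == session_client]

print(len(select_orders("100")))  # -> 2: client 100 sees only its own data
print(len(select_orders("200")))  # -> 1
```

The same idea scales up to MDC: the isolation boundary just moves from a key column inside shared tables to entirely separate tenant databases.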

In contrast, there is another approach called multi-instance architecture. In this case several separate software instances operate on behalf of different tenants. Note: some of the definitions above are quoted from Wikipedia.

When it is said that HANA is capable of multitenancy, it means there is one single HANA system with its system database. This DB stores configuration, meaning system-wide landscape information that allows configuration and monitoring of the overall system. Additionally, there can be N tenant databases in that HANA system. These store application data and user management. Application and user data are strictly isolated from each other; the tenants share only hardware resources, as they all run on the same instance. Users of one tenant database cannot access the application data of another tenant. From the DB backup and recovery perspective, each tenant is handled independently of the others. HANA's multitenancy feature is called Multitenant Database Containers (MDC).

By MDC it is meant that there is one HANA system represented by one SID. Such a system supports multiple applications in different databases and schemas.

Note: these are quotes from SAP Note 2096000 - SAP HANA multitenant database containers.
The following options are available in HANA SPS09 with regard to multitenancy:

Standard deployment – one database in one system
Multiple Components One Database (MCOD) – multiple applications sharing one database
Multiple Components One System (MCOS) – more than one database on the host, one DB per SAP system (SID)
Multitenant Database Containers (MDC) – more than one tenant database in one system

Further information:
2096000 - SAP HANA multitenant database containers - Additional Information
2075266 - SAP HANA Platform SPS 09 Release Note
1826100 - Multiple applications SAP Business Suite powered by SAP HANA
1661202 - Support for multiple applications on SAP HANA

Tuesday, January 27, 2015

New version of the Master Data deletion

In BW 7.0 SP23 a major change related to master data deletion was introduced. The aim is to overcome issues with MD deletion like poor performance, memory overflows, dumps etc. The change is visible in the dialog of the master data deletion for an IO. The dialog box is available e.g. in tcode RSA1 -> right click on the IO -> Delete Master Data:

There are a few more options available for the deletion now. I'm not going into the details about them; I refer you to the SAP Notes listed below for the details.

Just one more piece of information, on how to switch this functionality ON. It can be done in RSA1, menu Settings -> Global Settings. Tick ON the check box called 'Pack.Mas.Data.Del'. If you do not have the check box available, then you are running on a newer version of BW, because as of release 7.3 this is the default deletion option. So explicitly choosing the new MD deletion is only necessary in releases below 7.3.

More information:
1370848 - New Master Data Deletion - Information
1705824 - Old master data deletion is obsolete

IO's Attributes flag: Delete master data with 0RECORDMODE

I noticed recently a new flag on the IO's attribute maintenance screen. It is available in tcodes like RSA1 or RSD1 while the IO is displayed on its Attributes tab. Help key F1 doesn't say much about it, just the same as the flag name itself:

Delete Master Data with 0recordmode

In order to find out what it is about, I tried to create a new IO and check the flag on. What happened was that it added IO 0RECORDMODE as a new attribute of the IO. Similarly, if 0RECORDMODE is added manually into the list of the IO's attributes, the flag gets turned ON automatically.
I tried to find out in which database table the flag is stored. I thought about tables like RSDIOBJ (the directory of all InfoObjects) but there was no indication at all that this flag is stored there. I tried a couple of other tables without any success. After some debugging I found out that there is no table which holds this flag. The presence of the flag is determined on the fly while the system checks whether the particular IO has 0RECORDMODE as its attribute. In case 0RECORDMODE is there, the flag gets checked on. The standard ABAP method IS_RECORDMODE_CONTAINED of class CL_RSD_IOBJ_UTILITIES does that via the following statement:

p_recordmode_exists = cl_rsd_iobj_utilities=>is_recordmode_contained( i_iobjnm ).

The code of the method goes down to the DB table RSDBCHATR and checks whether the field ATTRINM contains a value equal to 0RECORDMODE. The field ATTRINM is also returned e.g. by function module RSD_IOBJ_GET in its export table E_T_ATR.
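The derived-on-the-fly behavior can be sketched like this (Python, for illustration only; the attribute catalog below is made up and stands in for the RSDBCHATR lookup):

```python
# Sketch of the on-the-fly check described above: the flag is not
# persisted anywhere; it is derived by looking for 0RECORDMODE among
# the IO's attributes, as IS_RECORDMODE_CONTAINED does via RSDBCHATR.

IOBJ_ATTRIBUTES = {  # hypothetical InfoObjects and their attributes
    "ZCUSTOMER": ["0COUNTRY", "0RECORDMODE"],
    "ZMATERIAL": ["0MATL_TYPE"],
}

def is_recordmode_contained(iobjnm):
    """Derive the flag: True exactly when 0RECORDMODE is an attribute."""
    return "0RECORDMODE" in IOBJ_ATTRIBUTES.get(iobjnm, [])

print(is_recordmode_contained("ZCUSTOMER"))  # -> True
print(is_recordmode_contained("ZMATERIAL"))  # -> False
```

This also explains why adding 0RECORDMODE manually flips the flag on automatically: the flag is just a computed view of the attribute list.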

OK, so now I knew some basic facts about the flag. But what is it really doing? Next I turned to SCN to check if there is something about this feature. I found a few forum threads (here and here) but they do not say much about it. I continued searching SAP Notes/KBAs, and there I found the following KBA: “1599011 - BW master data loading: Changed time-dependent attribute is not updated into master data table from DSO”. And here it was.

The flag introduces delta handling into the master data update. Having 0RECORDMODE among a particular IO's attributes makes it possible to recognize new records (N) for the delta load. In case the data flow of a particular master data IO has an underlying DSO object which can provide IO 0RECORDMODE, it can be leveraged to process deletions in the master data delta, as long as it is mapped to the corresponding IO attribute.

As per the KBA, this feature was introduced in SAP BW 7.11 (SAP NW BW 7.1 EhP 1).

Monday, January 26, 2015

Options of SAP Lumira and where to download Lumira?

I introduced Lumira in my earlier post here. It is basically visualization software that enables you to tell a story about your data. Put more simply, it is nothing else than a BI frontend tool. To play with it you just need an input file with any kind of data you like; it gets imported into the tool and the visualization can start…

As with most SAP software today, Lumira comes in 2 flavors:

The first one is the classic on-premise deployment. To deploy it e.g. for educational, demo or training purposes just follow this page; 32 and 64 bit versions are available for the MS Windows 7 or 8 platforms. Furthermore, in case you are an enterprise user, the following editions of Lumira exist:

Lumira Desktop – this is the one that you can actually download for free from the above mentioned link. It enables you to develop visualizations based e.g. on your flat file data. Again, this edition is suitable for demo/training/evaluation purposes. The currently available version is 1.22.

Lumira Edge – the specialty of the Edge edition is that it is meant for departments where visualizations can be shared among their members. The currently available version is 1.0. More info can be found here.

Lumira Server – capable of visualizing, story creation and sharing datasets and stories while utilizing the SAP HANA repository. The Server edition is installed on the HANA server. The currently available version is 1.22.

Lumira Extension - Data Access – SDK for extending and customizing Lumira Desktop's data source access capability. Currently available version is 1.22.

Lumira Extension – Visualization – SDK for extending Lumira Desktop's visualizations. Currently available version is 1.22.

Lumira Visualization Extension Plugin for SAP Web IDE – a plugin for Web IDE to extend Lumira visualizations using the Vizpacker plugin. The currently available version is 1.0.

The cloud is the second option here. The cloud version is available at the following page. Right now it comes with 1 GB of free storage for data and visualizations. There are also options to run Lumira in the SAP HANA Enterprise Cloud (HEC), e.g. deployed together with SAP BI 4. These are part of Analytics in the Cloud.

More information:

Wednesday, January 21, 2015


Well, do not get me wrong, but this blog post is about a solution from SAP which really doesn't exist at the time of writing. It is rather a research project at HPI in Potsdam, Germany. What they actually do in the project is explore and test the possibilities of large-scale ERP systems running on SAP HANA.

sERP is a continuation of the new SAP mantra called “Simple”. This was announced at last year's conference, SAPPHIRE NOW 2014. A suite of products will be released, called S-Innovations or sERP, where S stands for simplicity. What SAP is trying to do with their products is to simplify them: quicker implementation, simpler testing, leaner maintenance, integration and upgrades, together with a user friendly GUI/UI. For sure a lot of things will change, and going down the simplification road some things will be lost. The main drawback is losing the vast flexibility provided by the customization possibilities built into SAP. sERP is considered to be similar to solutions like SAP Business All-in-One or SAP Business ByDesign, which are ERPs leveraging pre-configured best-practice scenarios. Another point is that, being a cloud based solution, there are further advantages in terms of infrastructure and maintenance.

The first product from the sERP suite is already live: SAP Simple Finance, or sFIN. It was Hasso Plattner's keynote at the above mentioned SAPPHIRE NOW where he shared what led to the development of sFIN. He gave the example of ECC's FI module tables, which can be simplified down to just a few of them, BKPF and BSEG, as the FI and CO modules rely on the data from these two tables. The simplification is also driven by the fact that with apps running on HANA there is no more need for aggregates or indices, and table updates can be replaced by pure inserts. Some of these things are simply becoming redundant.

What else may come in the future? Well, more ECC modules can go through simplification. Maybe we will see sLOG (Logistics; or sSD, sMM, sPP, sPM), sHR or sHCM, sMfg, sQM (Quality Management) etc. Even components beyond SAP ERP could follow, like sCRM, sPLM, sSCM, sSRM… This is pure speculation, but ERP and ECC could even morph one day into one system, possibly together with the BI (BW) system.

SAP Cloud for Customer (SAP C4C)

Cloud, cloud and one more time cloud: it is everywhere nowadays. Yes, it is true. In this post I will introduce another cloud solution from SAP. It is called SAP Cloud for Customer (C4C or CfC) and it is a suite of integrated cloud based solutions. It aims to support the customer related needs of organizations which face their customers directly. Well, they all do, right? So the suite of apps deals with sales, customer service, social media tracking and analytics, and marketing. It basically covers sales and beyond-sales (CRM) processes for sales people on one common platform.

C4C was formerly called SAP Customer OnDemand. Here are the components (sometimes called features) of C4C. As a side note, sometimes only 3 of them are listed, excluding the Marketing one:

SAP Cloud for Sales – a mobile sales application that is easy to integrate with the whole back office operation. This means traditional and operational CRM related to marketing and the field force.

SAP Cloud for Service – supports reaching customers via different channels in real time to provide a customer experience such as processing and resolving issues.

SAP Cloud for Social Engagement (also called Customer Engagement) – tools for connecting/engaging with prospects and customers via different social media.

SAP Cloud for Marketing – covers marketing functions like funds, campaigns, targeting, groups and marketing execution.

From the integration point of view, there are connectors for integrating C4C with other cloud software. This is not a final list, as there may be others which I am simply not aware of.

More information:

- update on 07/17/2015 -
In short, C4C is the next evolution of SAP CRM, just purely cloud based. From a technology point of view, C4C is built on top of the SAP Business ByDesign (ByD) platform.

Monday, January 19, 2015

Something about LIS / LO extraction

LIS stands for Logistics Information System. The LIS is an information system which, per its definition, is used to plan, control, and monitor business events at different stages of the decision-making process. In other words, it collects data for several of SAP's logistics application modules in ECC in the background, for reporting purposes. Initially, when there was no BW, all reporting happened in R/3 (called ECC nowadays). Once SAP came up with BW, they reused LIS also for extracting the data out of R/3 to BW; there are a few standard SAP Business Content extractors, and for logistics this is it. This is basically why most BW people know about LIS: it is related to BW in terms of data extraction.

The LIS is comprised of the following information systems, which all have a modular structure. The number listed below also corresponds to the application number from the BW point of view: 2LIS_XX_YYYYY, where XX stands for the application number and YYYYY is the type of data being extracted by the particular datasource. E.g. 2LIS_02_HDR is Purchasing Data (Header Level).

02 - Purchasing Information System
03 - Inventory Controlling
04 - Shop Floor Information System, Plant Maintenance Information System
05 - Quality Management Information System
06 - Invoice Verification
08 – Shipment/Transportation
11 - Sales Information System (SD Sales)
12 - SD shipping
13 - SD billing
17 - Plant maintenance
18 - Customer service
40 – Retailing
43 - Retail POS Cashier
44 - Retail POS Receipts
45 - Agency business
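The 2LIS_XX_YYYYY convention above can be sketched as a tiny parser (Python, illustrative only; the application map is just a subset of the list above):

```python
import re

# Subset of the application numbers listed above, keyed by the XX part.
APPLICATIONS = {
    "02": "Purchasing Information System",
    "03": "Inventory Controlling",
    "11": "Sales Information System (SD Sales)",
    "13": "SD billing",
}

def parse_datasource(name):
    """Split a logistics datasource name into application and data part."""
    m = re.fullmatch(r"2LIS_(\d{2})_(\w+)", name)
    if not m:
        raise ValueError("not a 2LIS datasource: " + name)
    app, part = m.groups()
    return APPLICATIONS.get(app, "unknown application"), part

print(parse_datasource("2LIS_02_HDR"))
# -> ('Purchasing Information System', 'HDR')
```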

Over the years SAP realized a few drawbacks of LIS and came up with an extension of it: LO (Logistics extraction). The LO Cockpit is more efficient than LIS and needs less customizing. Apart from the fact that LO is newer, there is no need to maintain LIS infostructures, and thanks to the rework of its functionality it extracts a smaller data volume from the source system due to reduced redundancy.
How is the data extracted from ECC to BW? This is different for LIS and LO. LIS is not used anymore, as it was replaced by LO, but I will shortly describe both ways of extraction. Another difference is that LIS works as a push mechanism while LO implements a pull mechanism. LO is also capable of handling bigger data volumes compared to LIS.
While the data is created by business users in ECC (e.g. transactional data), it is written to the particular logistics application tables (in the case of SD: VBAK, VBAP, etc.).

LIS: the data is stored in LIS tables in parallel to the application logistics tables. After the LIS is set up (tcode LBW0 -> Set up LIS environment), the following objects are generated in the data dictionary:
Three transparent tables (so-called statistics tables) and one structure are generated per infostructure. The two tables SnnnBIW1 and SnnnBIW2 are used within the delta update process. The third table, Snnn, represents the infostructure itself, and the structure SnnnBIWS is used for replication of the datasource into BW.

While data is posted in business transactions, it is also written into the statistics tables. These tables were (and are) used for reporting in the LIS. E.g. LIS table S012 keeps purchasing data (ECC logistics application tables EKKO/EKPO); here the data is stored in a structure optimized for reporting. The same goes for the tables S012BIW1 and S012BIW2. From the S* tables the data is pulled out of ECC by BW.
For details on how to set up the whole LIS extraction refer here.

LO: Again, the data is posted by business users into the logistics application tables. Collection of the data for BW purposes is managed by the “LO Data Extraction: Customizing Cockpit” (tcode LBWE). In this tcode it is possible to maintain the administration of the extract structures. This means that we can add or remove particular fields in the extraction structure; while a field is in the structure, it is being extracted. Notice that only the fields supported by the extractor are listed here. So-called custom fields are not present, but it is possible to extend the datasource (so-called append fields) and add code into a user exit for their extraction.
The LO also uses the concept of so-called setup tables. The setup tables are used as data storage to initialize delta loads and to serve full loads. This is very useful in case the ECC system went live some time before the BW implementation, so there is already transactional data in the ECC system. By running a job that fills the setup tables, the existing data is captured and can be sent to BW. New data created after the setup tables were loaded is then handled by the delta mechanism, as the delta is initialized once the setup tables are populated and their data is loaded.

Transactions to run job for filling up the setup tables:
OLIxBW, where x is app:
OLI1BW         Material Movements data
OLI2BW         Storage Location Stocks for Inventory Controlling
OLI3BW         Purchasing data
OLI4BW         Shop Floor data (PPIS)
OLI7BW         Sales Order data
OLI8BW         Sales Delivery data
OLI9BW         Sales Invoices data
OLIABW        Agency business data
OLIFBW         Manuf. data
OLIIBW         Plant Maintenance data
OLIQBW        Quality Management data
OLISBW        Customer Service
OLIZBW        Sales Invoice Verification
The naming convention for the setup tables is as follows (where nn is the app no.):

The naming convention for the extraction structures is as follows:

The naming convention for the communication structures is as follows:

Now, coming to the delta part of the extraction. As delta data is collected, it is written to the extract queues (tcodes SMQ1 or LBWQ). So-called collector jobs are then used to move this data from the extract queues to the delta (or BW) queues (tcode RSA7).
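The flow just described can be modeled conceptually (Python; queue and record contents are invented for illustration, and the real mechanism involves qRFC, not in-memory lists):

```python
from collections import deque

# Conceptual model: postings land in the extract queue (SMQ1/LBWQ),
# a collector job moves them to the delta queue (RSA7), and a BW
# delta request drains the delta queue.

extract_queue = deque()   # filled by the update of business transactions
delta_queue = deque()     # what BW actually pulls (RSA7)

def post_document(doc):
    extract_queue.append(doc)

def collector_job():
    """The periodic 'V3' collector: extract queue -> delta queue."""
    while extract_queue:
        delta_queue.append(extract_queue.popleft())

def bw_delta_request():
    """BW pulls and empties the delta queue."""
    batch = list(delta_queue)
    delta_queue.clear()
    return batch

post_document({"vbeln": "0000004711"})
post_document({"vbeln": "0000004712"})
collector_job()
print(len(bw_delta_request()))  # -> 2 documents transferred to BW
```

The separation of the two queues is what lets postings continue uninterrupted while BW pulls deltas on its own schedule.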

Collector job naming convention (the so-called V3 job):

In this blog I do not go into the details of the different V1, V2 and V3 updates. I refer anyone who made it this far to the blogs listed below, under useful information.
For details on how to set up the whole LO extraction refer here or here.

Other tcodes and ABAP reports that are useful for maintaining the BW extraction of logistics data in ECC:

Clearing the BW delta queues for LIS extractors: RSA7
Clearing the outbound queue: LBWQ
Processing of delta queues: LBWE
Moving data between the extract and delta queues: reports RMBWV3xx (xx stands for the app no.)
Checking whether the setup tables contain data: report RMCEXCHK
Deleting the data from the setup tables: tcode SBIW, or report RMCSBWSETUPDELETE
Diagnostics of the BW delta queue: report RSC1_DIAGNOSIS gives info on the status and condition of the delta queue for a specific DataSource
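The RMBWV3xx naming rule from the list above can be expressed directly (Python, illustrative; xx is the two-digit application number from the list earlier in this post):

```python
def collector_report_name(app_no):
    """Build the V3 collector report name for a logistics application."""
    return "RMBWV3%02d" % int(app_no)

print(collector_report_name(11))   # -> RMBWV311 (SD Sales)
print(collector_report_name("02")) # -> RMBWV302 (Purchasing)
```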

Useful information:
SMP Component: BW-BCT-LO-LIS Logistics Information System
436393 - Performance improvement for filling the setup tables

Thursday, January 15, 2015

Cloud Appliance Library – CAL

Do you remember what it used to take to do a simple PoC based on some SAP solution? Usually it is not that easy. Doing it on premise is a pain, and even in a private cloud it is not trivial. To solve the issues related to such deployments, and basically to enable customers to deploy test, training, evaluation or demo systems easily, SAP introduced the Cloud Appliance Library (CAL).

How does it work? With the CAL it is possible to deploy an SAP system from a preconfigured SAP software appliance. The final system runs at one of the cloud (hosting) providers; in most cases the provider is Amazon Web Services (AWS) or Microsoft Azure. Of course, whoever does the deployment must have a cloud provider account upfront, as there is a charge for running the system on their infrastructure. But the point is not to lose time while deploying. Access to the system is via a web interface, so the user needs only a web browser.

What systems are available in the CAL?
The following types of systems are available to be deployed from the library: standard SAP solutions like the Business Suite on HANA; Rapid Deployment Solutions (RDS) products, e.g. ERP, CRM; and technology components such as the NW ABAP Application Server.

NB: there is more to say about pricing. There are two use cases. Trial: only the infrastructure fee to the cloud provider applies. Subscription: this is a bring-your-own-license (BYOL) arrangement for the particular SAP solution, plus a license fee for the CAL itself. Of course, the fee for using the infrastructure goes to the hosting provider on top of these.

Useful links:

Wednesday, January 14, 2015

SAP Web IDE – what it is?

SAP Web IDE (or SAPWebIDE) is a new, completely web based development environment for SAP artifacts, covering the end-to-end development process. It was released in the middle of last year (2014). The Web IDE is the continuation of the so-called River RDE (Rapid Development Environment); a simple view of things suggests that River RDE got renamed to SAP Web IDE.
The development artifacts which can be developed in SAP Web IDE are HTML5/UI5/SAPUI5/OpenUI5 applications. Also, extension and customization of SAP Fiori apps can be done in the tool.

Download SAP WebIDE:
The current version of SAP Web IDE is 1.4. It is available both on HANA Cloud and on premise. For the cloud version follow the link for your region: EU, AP, NA. Download SAP Web IDE from the SAP Store.

Install SAP WebIDE:
Once you have downloaded the zip file from the SAP Store (see the link above), unzip it to a local folder, e.g. C:\SAPWebIDE. Download Eclipse Orion; Orion is a browser based, open source IDE platform written in JavaScript. Unzip Orion into the same folder where the Web IDE is located. Download the Eclipse Director and unzip it, again into the same folder.

Start SAP WebIDE:
To start the tool, first open a command line and navigate to the folder of your choice from the Install part. Go into the Director folder within the command line. Now run the Director part with a command similar to the following one, depending on your folders:
director -repository jar:file:..///!/ -installIU -destination c:\SAPWebIDE\eclipse

Finally, run the Orion part by starting orion.exe from the corresponding folder.

After that, SAP Web IDE is accessible via the URL http://localhost:8080/webide/index.html

Registering user:
In order to use the tool, a user needs to be created:

There we go..

Useful links:
WebIDE videos by DJ Adams

Tuesday, January 13, 2015

Vendavo – Price and Margin Management (PMM)

For a software solution for Price and Margin Management, SAP is relying on a company called Vendavo, through their alliance in place since 2005. SAP offers this solution as a reseller of Vendavo. It is officially called SAP Price and Margin Management by Vendavo (SAP PMM by Vendavo).

What is Vendavo PMM basically about? It is a solution for creating customer-specific pricing strategies, with tools for optimizing price negotiations and institutionalizing best practices in value-based selling. So-called pricing “playbooks” are available: a predefined set of structured analytical steps to help find margin leakage. Role based dashboards are also available, coming with prebuilt pricing metrics. Finally, there are visualization tools to understand revenue and profit levers.

An interesting part about Vendavo is that they also use in-memory technology, which runs on top of a traditional database (Oracle or DB2). Basically they are using it to support their tool called Vendavo Profit Analyzer. The Profit Analyzer provides an in-depth analysis of transaction data to highlight opportunities for revenue growth. In 2014 Vendavo and SAP announced a broader relationship where HANA comes into play: HANA will replace the current proprietary in-memory technology used by Vendavo.

More information: -> SAP xApps -> SAP Price and Margin Management by Vendavo
1229778 - Trouble Shooting Vendavo Price and Margin Management

SMP component: XX-PART-PMM Vendavo Price + Margin Mgmt.