Wednesday, July 10, 2019

How to find out backend job name for BPC Package execution

All BPC Packages are executed as SAP jobs in the SAP BW backend system at run time. Sometimes it is useful to look at these jobs when an issue occurs and needs to be analyzed.

In case of a BPC Package, the investigation starts in the EPM add-in via menu Data Manager -> View Status -> View Package Status. Here, the column called SEQ contains a generated string that uniquely identifies the run of the BPC Package.


With that SEQ ID we go to the SAP BW backend, to the table UJD_STATUS (BPC: Package Status). We put the SEQ ID value into the LOG_ID (Log-ID of a Process Chain Run) field, and the JOB_NAME field then returns the real SAP job name, which can be further investigated in e.g. t-code SM37.
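For repeated lookups, the table read can be scripted. A minimal ABAP sketch (the length of the SEQ parameter is an assumption; check UJD_STATUS in SE11):

```abap
REPORT zbpc_find_job.

* Look up the backend job name for a BPC package run,
* using the LOG_ID / JOB_NAME fields of UJD_STATUS described above.
PARAMETERS: p_seq TYPE c LENGTH 25 OBLIGATORY. "SEQ from View Package Status

DATA: lv_job TYPE btcjob.

SELECT SINGLE job_name FROM ujd_status
  INTO lv_job
  WHERE log_id = p_seq.

IF sy-subrc = 0.
  WRITE: / 'Backend job name:', lv_job. "investigate further in SM37
ELSE.
  WRITE: / 'No entry found for SEQ', p_seq.
ENDIF.
```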




Checking status of cube behavior

Real-time cubes in SAP BW are planning cubes on which planning functions can be implemented. In short, this means users can enter and modify data. It is then possible to copy, change and run many different calculations on the data (e.g. distributions based on reference data or forecast functions).
Real-time cubes can therefore be set into two modes (also called behaviors): load behavior, in which the cube can be loaded via regular BW transformations, and plan (or real-time) behavior, in which the cube saves planning data entered by users.

The change or the switch of the two modes can be done either manually:




The same switch that is done manually via RSA1 -> Modeling -> right-click on the cube -> Planning-Specific Properties -> Change Real-Time Load Behavior can also be done programmatically.

The following FMs can be used to check the real-time/load behavior and to set it:
RSM_LOADALWD_GET
RSM_LOADALWD_SET       

In addition, the ABAP report SAP_CONVERT_NORMAL_TRANS can be used for the same purpose. Similarly, there is a process type that can be put into a process chain for this.

The table RSMDATASTATE (Status of the data in the Infocubes) and its field LOADALWD (Allow loading for real-time data targets) store the behavior the cube is currently set to. If LOADALWD is set to X, the cube is in loading mode; if it is blank, it is in planning mode.
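The current behavior can thus be read straight from the table. A small sketch (assuming the RSMDATASTATE key field is INFOCUBE; verify in SE11):

```abap
REPORT zrt_cube_mode.

* Report whether a real-time cube is currently in load or planning mode,
* based on RSMDATASTATE-LOADALWD as described above.
PARAMETERS: p_cube TYPE rsinfocube OBLIGATORY.

DATA: lv_loadalwd TYPE c LENGTH 1.

SELECT SINGLE loadalwd FROM rsmdatastate
  INTO lv_loadalwd
  WHERE infocube = p_cube.

IF sy-subrc <> 0.
  WRITE: / 'Cube not found:', p_cube.
ELSEIF lv_loadalwd = 'X'.
  WRITE: / p_cube, 'is in load behavior'.
ELSE.
  WRITE: / p_cube, 'is in planning (real-time) behavior'.
ENDIF.
```

The same information can be fetched via FM RSM_LOADALWD_GET instead of the direct table read.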

Friday, July 5, 2019

Tips on troubleshooting UD Connect type based source systems connection to SAP BW

Multiple issues can pop up while connecting SAP BW to UD Connect based source systems like MSSQL. In this blog post, I list a few of them I recently faced.

1. Message No. RSSDK300:
S:RSSDK:300 Cannot convert a value of 'Local' from type java.lang.String to INT at field CURRENCYTYPE
or
S:RSSDK:300 Cannot convert a value of 'LOCAL' from type java.lang.String to FLOAT at field CURRENCY
Error RSSDK300 can normally be solved by reactivating the corresponding DataSource; afterwards the following errors may pop up:
S:RSSDK:300 Field ENTITY is not a member of /BIC/CAZRXD_V_0001 70000001
Message No. RSSDK300
or
S:RSSDK:300 Column: Entity not found    Message No. RSSDK300
To solve this, look for SAP Note 1009063 - UDConnect displays erratic behaviour in loading data. Or, as per Note 1040572 – “UDC: Error "Column XXX not found" happens randomly”, use the parameters "fixedSchema" and "fixedCatalog", because the MSSQL system may be using different DB schemas and you may be using a table/view that has the same name in different schemas.

2. Message No. RSM340
Errors in source system       Message No. RSM340

3. Message No. RSDS_ACCESS036
Error while extracting field list of UD Connect object: UDC ADAPT ERROR::RSSDK|100|while trying to invoke th

4. Message No. RSSDK100
UDC adapter problem: connection area     Message No. RSSDK100

Things that may help to solve above mentioned errors:

JDBC test page
There is JDBC test page available on NetWeaver JAVA stack with following URL:
http://server:port/TestJDBC_Web/TestJDBCPage.jsp
Here one can test the connection to a particular UD Connect based system. Moreover, you can even extract the columns of the table or view the extraction retrieves data from. This is an extremely useful check; it can reveal many issues, for example when BW’s DataSource expects a column called ABC but the source system has a completely different column name like XYZ.

Meta data cache
As UD Connect based systems use a JAVA NetWeaver server in the middle, there can be issues on the JAVA server. Mostly these are related to the metadata cache, because a metadata description may already be present in the metadata cache of the JAVA server. After changes are done in the source system (column name change, column data type/length change, etc.), these changes must be replicated to the JAVA server, as they will not automatically be removed from its metadata cache. Normally this can be solved by restarting the JAVA server, which clears the cache. However, in production environments it may not be easy to get a time slot for the restart. Luckily, as of NW version 7.2 the cache can be cleared without a restart. The procedure is described in SAP Note 1479970 - Changes in RFC modules are not reflected in Java Server: in the NetWeaver Admin tool choose tab "Availability and Performance", then select "Resource Monitoring", then "JCo Monitoring", and there choose tab "Meta Data Cache". It is also possible to clear the cache programmatically.

Investigate JAVA logs
Look for following logs to dig out more details about the error:
·        \usr\sap\\\j2ee\cluster\server?\BI_SDK_Trace.log (there's one log per server node)
·        \usr\sap\\\j2ee\cluster\server?\log\defaultTrace.?.trc (check the latest trace file from all server nodes)
·        \usr\sap\\\j2ee\cluster\server0\log\applications\BI\UDI.?.log

Debug on ABAP Stack
Set a breakpoint in FM RSSDK_DATA_UPLOAD_ALL. Also see t-code RSSDK - DB Connect and ABAP program RSSDK_START.

More information:
1722695 - JRA: Object not found in lookup of CallbackConnectionFactory
2000681 - JRA: NullPointerException in CallbackConnectionFactory
1004194 - UDC: How to create log files for problem analysis
512739 - BW external DB Connect for MS SQLServer
1396552 - Remote connection delivers wrong character data

Thursday, May 30, 2019

How to find out code of SAP icon

Codes of SAP icons are used in ABAP reports in some cases. For example, a dynpro that has icons assigned to particular data rows in a table grid may contain logic that evaluates the value of a particular icon. In case one needs to debug it and perhaps change the icon (i.e. change the value of the icon’s code), it is handy to know these codes, because the icon can only be entered into the debugger by providing its value.

Recently I debugged some BW UD Connect DataSource screens and came across a statement that evaluated the value of an icon:

The variable icon_bw_datasource is declared in the type group called ICON. In this particular type group, there are more than 1200 icons available.

Type groups are available from Data Dictionary related t-codes like SE11.
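To see an icon's code, a short report is enough. A sketch (icon_d is the standard icon type; the constant prints its internal code when written plainly and renders the symbol with AS ICON):

```abap
REPORT zicon_code.

TYPE-POOLS: icon. "type group ICON with the icon constants

DATA: lv_icon TYPE icon_d.

lv_icon = icon_bw_datasource.

* Written plainly, the constant shows the internal code
* (the value to enter in the debugger); AS ICON renders the symbol.
WRITE: / 'Code:', lv_icon.
WRITE: / 'Icon:', lv_icon AS ICON.

* The kind of evaluation seen in the debugged screens:
IF lv_icon = icon_bw_datasource.
  WRITE: / 'Row carries the BW DataSource icon'.
ENDIF.
```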

How to find out when and who ran what APD

Sometimes it is needed to find out when some APDs ran, who ran them and so on. In case the APD name is known, it is easy: one just needs to go to the APD maintenance part of t-code RSA1. That screen can also be accessed directly via t-code RSANWB. Here the particular APD needs to be displayed; there is a Monitor icon available on its toolbar.

The monitor of the APD also has a separate t-code called RSANWB_MONITOR. The report behind this t-code is called RSAN_PRR_MONITOR_NO_SEL_SCREEN. It just submits another report, RSAN_PRR_MONITOR. Finally, the latter report calls FM RSAN_PRR_MONITOR_START, where all the logic of the APD monitor is implemented.

However, what to do in case the name of the APD that ran is not known? In such a case t-code SLG1 can be leveraged, by supplying the string RSANPR into the Object field of SLG1.

SLG1 -> Object = RSANPR
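The same SLG1 data can also be read programmatically via the standard application-log API. A hedged sketch using BAL_DB_SEARCH (structure and field names per the standard BAL interface; verify in SE37/SE11):

```abap
REPORT zapd_runs.

* Find APD run logs (log object RSANPR) - the same entries SLG1 shows.
DATA: ls_filter TYPE bal_s_lfil,
      ls_obj    TYPE bal_s_obj,
      lt_header TYPE balhdr_t,
      ls_header TYPE balhdr.

ls_obj-sign   = 'I'.
ls_obj-option = 'EQ'.
ls_obj-low    = 'RSANPR'.
APPEND ls_obj TO ls_filter-object.

CALL FUNCTION 'BAL_DB_SEARCH'
  EXPORTING
    i_s_log_filter = ls_filter
  IMPORTING
    e_t_log_header = lt_header
  EXCEPTIONS
    log_not_found  = 1
    OTHERS         = 2.

IF sy-subrc = 0.
  LOOP AT lt_header INTO ls_header.
    "who ran something and when; EXTNUMBER typically identifies the run
    WRITE: / ls_header-aldate, ls_header-altime,
             ls_header-aluser, ls_header-extnumber.
  ENDLOOP.
ENDIF.
```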

Tuesday, May 21, 2019

How to find out APD that have Performance Settings set to Process Data in Memory

APD processes have an option of processing all data in memory. This means that the data of the APD is stored entirely in main memory while processing. It can be set in the maintenance screen of the APD: in the menu called Goto there is an option Process Data in Memory. Normally this shall be set on only for small amounts of data.

In case a large volume of data (a couple of million records) is processed like that, execution of such an APD may terminate once the main memory no longer has sufficient space for the data.

Now, how to find out which APDs have this flag set on? The data element corresponding to this flag is RSAN_PROCESS_IN_MEMORY_FLAG. However, it is not saved directly in some table. The main table that stores data about the APDs is RSANT_PROCESS. It contains a column called XML, in which all settings like filters, mapping etc. are stored. Within the XML column there is a section called PROCESS_DATA_IN_MEMORY. If it is equal to X, the APD is processed in memory.


I created a small ABAP program that lists all APDs which have Performance Settings set to Process Data in Memory. The program is available at my github.
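The core of such a check can be sketched as follows (a simplified sketch; the field names PROCESS/OBJVERS on RSANT_PROCESS and the storage form of the XML column may differ by release, so a plain substring scan is used here):

```abap
REPORT zapd_in_memory.

* List active APDs whose XML settings carry the
* PROCESS_DATA_IN_MEMORY section.
DATA: lv_process TYPE c LENGTH 30,
      lv_xml     TYPE string.

SELECT process xml FROM rsant_process
  INTO (lv_process, lv_xml)
  WHERE objvers = 'A'.

  "crude scan; refine the check against the actual XML layout
  "to test the section's value for X
  IF lv_xml CS 'PROCESS_DATA_IN_MEMORY'.
    WRITE: / lv_process.
  ENDIF.
ENDSELECT.
```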


Sunday, May 19, 2019

Switching to BEx Query Designer from SAP BI front end tools

Running BEx Query Designer (QD) directly from a BI front-end tool is a very common case, especially when the QD directly opens the query which is displayed in the front-end tool. Having this function in place means no need to open the QD separately and no need to find the particular query in the QD after it is opened.

BEx Analyzer has had this function for a very long time. It is available via menu Business Explorer -> Analyzer -> Choose Add-Ins, and from the BEx Analysis Toolbox, choose Tools -> New Query.



In Analysis for Office (AfO), which is supposed to be the replacement for BEx Analyzer, this function was missing in its earlier versions. However, at least in 2.6 the function is there. It needs to be enabled first by customizing the AfO UI. This is available via menu Analysis -> Customize Analysis -> Customize User Interface.


On this pop-up window, under the Ribbon part there is a Tools section, and under that one “Launch Query Designer” needs to be ticked.


Finally, the BEx QD is available on AfO's ribbon:




Tuesday, April 30, 2019

How to read/write/delete from/to DSO objects

Sometimes it is needed to store data into DSO objects from ABAP, either from a BW transformation or, for some reason, from regular ABAP programs. Therefore some kind of API needed to be provided by SAP to allow this. In classic (non-BW/4HANA) BW, several DSO types are available: standard, direct update and write optimized. The DSO type needs to be considered and the scenario of the DSO type shall be respected. Normally it makes no sense to write data to a write-optimized or standard DSO; regular BW flows shall be used instead. The only DSO type designed to be updated programmatically from custom code is the direct update DSO. That’s why a full API is only available for this DSO type.

DSO type              Methods of access
Standard              Open SQL SELECT / INSERT or UPDATE / BAPI_ODSO_READ_DATA_UC / SDRD_SEL_DELETION
Direct update         Open SQL SELECT / BAPI_ODSO_READ_DATA_UC / RSDRI_ODSO_INSERT, RSDRI_ODSO_MODIFY_RFC, RSDRI_ODSO_UPDATE_RFC, RSDRI_ODSO_DELETE_RFC
Write-optimized       Open SQL SELECT / BAPI_ODSO_READ_DATA_UC

Note that in earlier versions of BW the FM BAPI_ODSO_READ_DATA was available, but it is obsolete (as of version NW2004s) and BAPI_ODSO_READ_DATA_UC should be used instead.

The RSDRI* FMs are also available as non-RFC-enabled versions.
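As an illustration of the direct-update API, a hedged sketch of an insert (the DSO name ZMYDSO, its active-table type and the FM parameter names are assumptions; check the RSDRI_ODSO_INSERT interface in SE37):

```abap
* Insert records into a hypothetical direct-update DSO ZMYDSO.
DATA: lt_data TYPE STANDARD TABLE OF /bic/azmydso00, "active table of ZMYDSO
      ls_data LIKE LINE OF lt_data.

* ... fill ls_data fields and APPEND ls_data TO lt_data ...

CALL FUNCTION 'RSDRI_ODSO_INSERT'
  EXPORTING
    i_odsobject = 'ZMYDSO'   "hypothetical DSO name
    i_t_data    = lt_data    "parameter names assumed - verify in SE37
  EXCEPTIONS
    OTHERS      = 1.

IF sy-subrc <> 0.
  MESSAGE 'Insert into DSO failed' TYPE 'E'.
ENDIF.
```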



SAP application server time and host server time not in sync

In case the time of the SAP server and of the server where SAP runs are not coordinated, there is an ABAP ZDATE_ILLEGAL_LOCTIME dump like below:

Category               Internal Kernel Error
Runtime Errors         ZDATE_ILLEGAL_LOCTIME
Application Component  Not assigned
Date and Time          08.04.2019 14:15:00
Short Text
     The local time on the application server is not correct.

Many SAP applications and functions work on the premise that time always increases. When the mentioned times are not in sync it may lead to inconsistencies in SAP data, e.g. in generated ABAP sources or in data created by applications based on timestamps. The issue is even more relevant in today’s modern approach where SAP runs on virtual machines hosted by a host OS.

Normally when this situation is observed, the particular SAP application server on which the issue occurred must be shut down immediately. Afterwards the OS must be checked, but no reset of the OS time can be done while the SAP application server is still running. In some cases the issue is with the Network Time Protocol (NTP), which is responsible for synchronizing times among the SAP servers and the database server in a distributed SAP system.


More information:
2591611 - ZDATE_ILLEGAL_LOCTIME
447839 - ZDATE_ILLEGAL_LOCTIME
2535959 - Dump ZDATE_ILLEGAL_LOCTIME occurs, what should be analyzed

Tuesday, March 26, 2019

BPC add-in error: "wrong CSV Format"

The error below appears quite often while working with the EPM add-in in MS Office. It is actually also a tricky error, as there may be many things to consider that can cause it.
Below I introduce a few things that need to be checked, as they may be the root cause of this error.


1. Enhancing BPC master data info objects with new dimensions, new data in hierarchies etc. This is especially the case when errors like following can be found in BPC log:

2019-03-26 11:42:33,813|ERROR|Metadata|?.?||||||VSTA_Main| Member [ACCOUNT].[PARENTH13].[ALL ACCOUNTS] is attached to member [ACCOUNT].[PARENTH13].[1,,,,,,,,,,,] on hierarchy PARENTH13, but [ACCOUNT].[PARENTH13].[1,,,,,,,,,,,] doesn't exist# 2019-03-26

Solution: as per SAP Note 1709380, the hierarchy first needs to be deleted, the dimension processed, the data added back to the hierarchy, and finally the dimension processed again.
If this does not help, proceed with running the ABAP reports UJXO_CLEAN_DIM_CACHE and UJXO_CLEAN_TDH_DIM_CACHE according to Notes 2229878 and 2201768.
Also, report UJA_REFRESH_DIM_CACHE needs to run for the involved dimension as per Note 2269291.
If this is specific to TDH dimension see Notes 2767117 and 2303454.


2. Transport related issues. In this case, there is an entry in the BPC log corresponding to:

FPMXLClient.Connection.RESTConnection+HierarchyNode doesn't exist

It is caused by the transport, and according to SAP Note 2085650 the BPC dimension needs to be retransported.


3. Loading of data from flat files. See SAP Note 2411607 - "Wrong CSV Format" error or missing members in EPM client.


4. Inconsistency caused by /CPMB/A9* objects under 'unassigned nodes'. Proceed according to Note 1927742.


5. Restore environment related issues. See Note 2162971 on how to use report UJXO_CLEAN_DIM_CACHE to fix it.

Friday, March 15, 2019

How to find out who uploaded a file to BPC server

The EPM add-in for MS Excel offers functionality for uploading files to the BPC server. These are the so-called Data Files of the Data Manager functions. It enables uploading files from the local user machine to the BPC server. Uploaded files can later be used within BPC packages, transformations and so on.



I was wondering whether there is any information about who uploaded a particular file. Within e.g. the Data Preview function there is only information about when the file was last modified, but no information about who modified/uploaded it.


Luckily, there is a table available in the BW backend which holds this information: table UJF_DOC. While browsing the table in a t-code like SE11, you need to provide the environment ID first, into the APPSET field on the selection screen; afterwards put the full Data Manager path of the file into the field DOCNAME.

This returns the user name in the field LSTMOD USER, which is not available on BPC’s popups in the EPM add-in.
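The table read can also be done in a quick ABAP snippet (the 'last modified by' column name is assumed here from the screen label; verify it in SE11):

```abap
REPORT zbpc_file_user.

* Who last modified/uploaded a Data Manager file on the BPC server.
PARAMETERS: p_env TYPE c LENGTH 20  OBLIGATORY,            "environment ID
            p_doc TYPE c LENGTH 255 LOWER CASE OBLIGATORY. "full file path

DATA: lv_user TYPE syuname.

SELECT SINGLE lstmoduser FROM ujf_doc  "column name assumed from label
  INTO lv_user
  WHERE appset  = p_env
    AND docname = p_doc.

IF sy-subrc = 0.
  WRITE: / 'Last modified by:', lv_user.
ELSE.
  WRITE: / 'No such file found in UJF_DOC'.
ENDIF.
```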



Wednesday, March 13, 2019

Managing settings of AfO

Settings or configuration of Analysis for Office (AfO) was done by adjusting registry settings on the client machine for AfO version 1.x. However, this changed: as of version 2.x, the configuration of the AfO tool is done by maintaining XML configuration files. There are config files for the administrator and for the regular user, 2 files for each:

Files for the administrator are located in folder %PROGRAMDATA%\Sap\Cof:
Ao_app.config
Cof_app.config

And for the user in folder %APPDATA%\Sap\Cof:
Ao_user_roaming.config
Cof_user_roaming.config

Another way to access the settings is via Excel menu File -> Analysis -> Customize Analysis -> Technical Configuration.


More info:
2083067 - How to maintain settings for Analysis Office 2.x

Sunday, March 10, 2019

EPM-BPC error: Not possible to open report

Normally the EPM add-in for MS Excel, used as the interface to the BPC server, needs a temporary folder with read/write access rights to work properly with BPC reports. In the first place this can be set in the Excel settings under Save, where there is a property called “Default personal templates location”. However, in some organizations users may not be allowed to change this setting.

In case the setting “Default personal templates location” points to a folder to which the user has no write access, any attempt to open a report/template/layout from the BPC server does not succeed, and the BPC log (under EPM add-in menu More -> Log) raises the following message:

ERROR|FilesManagement|FPMXL.Client.OpenSaveServerManager.DownloadedFileFromServer|||||VSTA_Main|Access to path “...” is denied.#



Luckily, there is another option. The location of the default folder for save/open of local data can be changed via User Options in the EPM add-in: under Server Configuration there is a setting called “Default Folder for Local Open/Save”. Once this local setting points to any folder where the user has write access, it is possible to open reports/templates/layouts from the BPC server without any issues.




Thursday, February 28, 2019

Selective deletion logs – reviewed outside RSA1

Selective deletion of data in an InfoProvider can be performed on the manage screen in RSA1. There is a tab called Content and a button Delete Selection that allows this activity. The button displays a pop-up window that contains several other buttons for particular activities.




The main function of the popup is behind Deletion Selection, which allows entering the selection for the deletion. Once this is done, a job can be set up that performs the physical deletion.

Another button of the popup that I want to focus on is called Log. It displays an ALV grid of all selective deletions done on the particular InfoProvider in the past. The grid can be further drilled down into the details of each deletion; this shows all conditions/selections that were used for the deletion.

Sometimes I want to evaluate deletion logs outside the RSA1 t-code. For this there is a function module called RSDRD_LIST_ACTION_LOG. Via the FM, for which just the InfoProvider needs to be provided as an input parameter, the same ALV grid with all selective deletions done on the particular InfoProvider in the past is displayed.

The FM has another, optional input parameter called I_WITH_AGGREGATES. This enables displaying aggregate deletions on the InfoProvider as well.

The table which stores the deletion logs is called RSDRDLOGHEADER. This is a header table with info on who, when, which InfoProvider, the delete mode and the number of records that were deleted.

Detailed information on which fields were used for the deletions is stored in table RSDRDLOGPOSITION. The key which links both tables is POSITION_ID.
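Both ways of getting at the logs fit in a few lines. A sketch (the FM and table field names are assumptions based on the description above; check the interface in SE37/SE11):

```abap
REPORT zsel_del_logs.

* Show selective-deletion logs for an InfoProvider outside RSA1.
PARAMETERS: p_iprov TYPE rsinfoprov OBLIGATORY.

* Same ALV grid as the Log button in RSA1:
CALL FUNCTION 'RSDRD_LIST_ACTION_LOG'
  EXPORTING
    i_infoprov        = p_iprov  "parameter names assumed - verify in SE37
    i_with_aggregates = 'X'.     "optionally include aggregate deletions

* Or read the raw header entries directly:
DATA: lt_hdr TYPE STANDARD TABLE OF rsdrdlogheader.

SELECT * FROM rsdrdlogheader
  INTO TABLE lt_hdr
  WHERE infoprov = p_iprov.      "field name assumed - verify in SE11
```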


Monday, February 18, 2019

Loading large flat files via SAP GUI into BW

Recently I needed to upload a large flat file into a BW object. The size of the file was above 100 MB. I was using the classical data flow for uploads of flat files, meaning the file was uploaded via an InfoPackage to the PSA, and from there a DTP uploaded it further.

When I ran the upload to the PSA I got following error in the upload monitor:

Not enough memory for Data Provider
Message No. FES013

SAP GUI file functions have limitations on the size of files to be uploaded/downloaded. That was clear to me; however, my impression was that the issue would only occur with files way larger than just 100 MB. When I split the file into smaller chunks, I realized that the ideal file size that works without any issues is below 20 MB, which is quite a small portion of data. I used SAP GUI version 740, which is quite new, so this finding was a surprise to me.

Within SAP Note 875871 they mention that this limit is tied to the user workstation where SAP GUI is running. They specifically mention that issues are present when there is only 1 or 2 GB of RAM available at the user workstation. This wasn’t really my case, as I worked on a laptop with 8 GB of RAM.

All in all, it is quite surprising that in today’s age of big data there is such an issue. The only solution is to either split the files into smaller chunks (ideally below 20 MB) or upload the whole file to the application server of BW and load it from there.


More information:
875871 - DP: Memory bottleneck when you upload very large files

Monday, January 21, 2019

SAP IDES SAP ERP 6.0 EhP8 installation

Here is my latest experience with installing an IDES system. I installed the latest IDES system available to date, which is ERP 6.0 EhP8, aka SAP Business Suite 7i 2016. The installation is Windows OS based, with MS SQL as the database. VirtualBox was used as the virtualization environment.


1. OS installation:
I used Windows Server 2012 R2 as an OS. After the OS was installed I did following setup:
1.1 install Guest Additions in VirtualBox to enable folder sharing: in VM go to disk D - VirtualBox Guest Additions and run installer (e.g. C:\Program Files\VirtualBox\VBoxGuestAdditions.iso)
1.2 change host name of VM to 6 char only
1.3 enable .NET 3.5.1 manually for the Windows Server via Service Manager -> Local Server -> Manage -> Add Roles and Features Wizard -> Features
1.4 modifying host file with following entry:
#SAP ECC 6 EhP8
127.0.0.1      ides ides.dummy.nodomain
1.5 Switch off firewall in Windows from PowerShell in elevated mode by entering the following command:
Set-NetFirewallProfile -enabled false
1.6 make sure you have at least 350GB available space on hard drive where the system will be installed.


2. DB installation:
From the installation media, choose the one marked as SQL Server 2014 for SAP and run the installation via file SQL4SAP.bat.
The script takes approximately 20 minutes to complete.


3. SAP system installation:
3.1 Preparation check
Run the SAP installer from the command line, located on the media marked as SWPM. I used SWPM version SP24 (Software Provisioning Manager 1.0 SP 24). The SAP installer itself is the executable file sapinst.exe. The SWPM used to be a regular Windows app, not web based. I am not sure the web app is the best choice, but it worked for me.


The installer opens the default web browser and points to URL: https://:port/sapinst/docs/index.html

which is the UI of the installer. Here I chose SAP Business Suite 7i 2016 -> EHP8 for SAP ERP 6.0 ABAP -> MS SQL Server -> Preparations -> Prerequisites Check.

On the next screen there is a possibility to choose what shall be checked. The safe option is to leave it as is, running just the default checks.

Next screen is to browse for installation media of Kernel.

Once the path to the Kernel is provided, the installer opens the files there and checks them. After successful checks, the status is set to Available. If it is not, you need to provide the path to the proper Kernel installation media.

On next screen the installer provides results of Prerequisites checks.

Some things listed I just ignored and went further by selecting the No button, to not check them again.

Almost final screen of the prerequisite check:

And finally the last screen of the prerequisite check:



3.2 Installation itself

Start sapinst.exe again for the next phase. From the very first screen of the browser-based installation UI, select: SAP Business Suite 7i 2016 -> EHP8 for SAP ERP 6.0 ABAP -> MS SQL Server -> Installation -> Application Server ABAP -> Standard System -> Standard System.

Here I chose the Custom type of installation:

Restart of OS due to user switch:


Installation resumes automatically afterwards.

Specifying SID for the SAP system:

Specifying domain:

Specifying master password:

 Settings related to WIN domain:

Specifying passwords for SAP users on OS level (by default passwords are copied from master one):

Specifying connection to the database:

Confirmation of DB creation:

Browse for installation media, starting with the SAP Kernel.

Once the Kernel media are okay screen looks like this:

Again run of prerequisite checks:

Announcement that not all prerequisite checks were successful. This can be ignored by opting not to repeat the checks.

Software package browser this time for SAP Host Agent component:

More options for SAP Host Agent related to Windows Domain:

Passwords of the user under which the SAP Host Agent will run:

Specifying path to media for Data export CD1:

Specifying path to media for Data export CD2:

Specifying passwords to the database:


Specifying the size of the system. Of course, I went with the smallest option:

Location of the database file can be adjusted on this screen:

Database temporary DB files configuration:

SQL Server memory config related settings:

Declustering/depooling options. As this is a NetWeaver 7.5 based system, this is mandatory – it can’t really be skipped. More on this topic can be found here.


This operation takes a huge amount of time during the installation. All cluster and pool tables are being reorganized.

Via the settings related to the DB import you can speed up the installation if powerful hardware is available.

Primary Application Server Instance and ABAP Central Services Instance: I just left them as they came up by default.

ABAP Message Server Ports and Transport Host related settings.

ICM User Management for the SAP Web Dispatcher related settings.
SLD Destination for the SAP System OS Level related settings.

Message Server Access Control List related settings, not applicable in my case.

Additional Components to be included in the ASCS Instance

SAP Web Dispatcher Parameters

Secure Storage Key Generation

Generated secure key (needs to be backed up):

Review parameter list:

Installation now runs:

Importing data to database phase:



Most of the installation time was spent on the declustering/depooling operations. In my case it was around 24 hours. That was the runtime I got with an Intel i7 processor on just 1 core and 16 GB of RAM.
The progress of the DB import phase can be seen in a log file located at:

"c:\Program Files\sapinst_instdir\BS2016\ERP608\MSS\INSTALL\STD\ABAP\import_monitor.log"

When the progress reached Service Completion at 57%, there was the following error, which can be ignored:

Package 'SAPVER_CLUSTR' not loaded.
Message
Not all packages are loaded.
DIAGNOSIS: For details see output file with missing packages invalid_packages.txt and log file package_checker.log.

Normally this indicates a data load error, but in some special cases (for example, if some packages were processed externally) you can choose OK to continue.


Afterwards install continues with re-writing phase:

Running of procedures phase:

File actualization:

Attempt to start SAP instances:

During the start-of-SAP-instances phase I got the following error:

An error occurred while processing option SAP Business Suite 7i 2016 > EHP8 for SAP ERP 6.0 ABAP > MS SQL Server > Installation > Application Server ABAP > Standard System > Standard System (Last error reported by the step: ABAP processes of instance PCT/D00 [ABAP: STARTING] did not start after 10:10 minutes. Giving up). You can now:
• Choose Retry to repeat the current step.
• Choose Log Files to get more information about the error.
• Stop the option and continue later.
Log files are written to C:/Program Files/sapinst_instdir/BS2016/ERP608/MSS/INSTALL/STD/ABAP.

It turned out that the issue was related to the paging file of my Windows OS. I had to manually increase the size of the page file via right-click on This PC -> Properties -> Advanced System Settings -> Advanced -> Performance -> Settings -> Advanced tab -> Virtual Memory. Change the page file size to minimum 10240 and maximum 20480.

In case you are stuck in this step with a similar error, here’s what you can try:
- Check logs files like dev_w0 and dev_disp.

- Kill all other programs running in parallel to your SAP installation. The install may need all computer resources to complete successfully.

- Try to start the instance manually and proceed with the installation.


- See SAP Note 2535340 - ABAP processes of instance [ABAP: UNKNOWN] did not start after 10:10 minutes, with "28: No space left on device" in the dispatcher trace file while installing an additional dialog instance.


Analyzing the log files ended up with the following error:

An error occurred while processing option SAP Business Suite 7i 2016 > EHP8 for SAP ERP 6.0 ABAP > MS SQL Server > Installation > Application Server ABAP > Standard System > Standard System(Last error reported by the step: The JVM reports an exception during execution of class ( com.sap.sdt.ins.component.nw_ci_instance_abap_reports.RunRUTCSADAPT ) and function executeStepClass. DETAILS: The reported error message from JVM is: java.lang.Exception: Immediate start not currently possible at com.sap.sdt.ins.nw.batchjob.BatchJob.start (BatchJob.java:212) at com.sap.sdt.ins.nw.batchjob.ABAPProgramExecutor.startJob(ABAPProgramExecutor.java:224) at com.sap.sdt.ins.nw.batchjob.ABAPProgramExecutor.executeProgramAsBatchJob(ABAPProgramExecutor.java:179) at com.sap.sdt.ins.nw.batchjob.ABAPProgramExecutor.execute(ABAPProgramExecutor.java:116) at com.sap.sdt.ins.component.nw_ci_instance_abap_reports.RunRUTCSADAPT.execute(RunRUTCSADAPT.java:25) ). You can now:
• Choose Retry
to repeat the current step.
• Choose Log Files
to get more information about the error.
• Stop the option and continue later.
Log files are written to C:/Program Files/sapinst_instdir/BS2016/ERP608/MSS/INSTALL/STD/ABAP.

To solve this error (as in the case of the instance-not-started error), I again tried to free up memory as much as possible by closing programs that were not needed, and retried the step.


Execution of report RUTTTYPSET – this step is also time consuming due to heavy memory requirements. It might be that this report is scheduled in the system in many parallel instances, and they cause system overload. The purpose of the report is to correct some entries in tables.

Final message – installation is successfully over!

And very last message:

In closing, I’d like to say the following.

The reason why SAP moved SWPM to be a pure web app is not very clear to me. There are typically issues like the web page timing out when the installation progress is not refreshed, the web browser crashing, or the browser consuming too much memory that then can’t be used for the installation, etc. Having it web based probably helps SAP deploy cloud based solutions, but a thick client would serve better for on-premise installs.

The necessity of performing declustering/depooling of DB tables: this operation is now mandatory as of NetWeaver 7.5. I understand that this is needed in the case of HANA DB, but I do not see that much benefit for other DBs. I’d prefer to have it optional and not enforced by the installation. The operation itself is very time consuming, and the overall installation time ballooned just because of it.

Hard drive space for an MSSQL based IDES system needs to be at least 350 GB; even better is to have approximately 380 GB. Otherwise the installation will crash, and in many cases you need to start over.

This is also one of my last installations of IDES systems. As SAP is moving everything to the cloud and to HANA DB, there won’t be many chances to do this again in the future, mostly due to decreasing demand. It is much easier to deploy it in the cloud. Also, IDES on HANA DB is much more difficult to set up, as HANA needs to be installed first, and that is also not an easy task to complete.