Channel: Software Logistics

SGEN: FAQ


Objective:

 

This FAQ provides answers to commonly asked questions about SGEN that we have gathered from our customers. Please review the information below:

 

What is SGEN?

 

SGEN is the transaction to generate ABAP loads for large numbers of programs, function groups, module pools, and so on, as well as for Business Server Page (BSP) applications.

 

SAP Note 379918 lists the report and table names used during SGEN execution. In the latest version, reports RSPARAGENLOD, RSPARAGENJOB and RSPARAGENER8 are replaced by RSPARAGENLODM, RSPARAGENJOBM and RSPARAGENER8M respectively, and table GENSETC is replaced by GENSETM.

 

SGEN execution consists of two parts. The first part determines the so-called generation set based on the options chosen on the first screen; the generation set is stored in table GENSETC or GENSETM. The second part compiles the source code into loads; this part is triggered by background job RSPARAGENER8 or RSPARAGENER8M.

 

Where can I find more information about SGEN?


You can go to transaction SGEN and click the Information pushbutton; it contains a detailed explanation of how to use the transaction. You can also go to the SAP online help and search for SGEN.


When should I run SGEN?


SGEN is designed for large-scale program generation, for example after a release upgrade or after importing Support Packages. SAP Notes 438038 and 162991 explain other generation tools as well.

 

How to improve SGEN performance?


Compared with the calculation of the SGEN generation set, the second part of SGEN takes much longer. However, this part is outside the control of SGEN: it depends on the resources of the SAP system itself, such as main memory, the number of CPUs and, last but not least, database performance. The runtime of the generation jobs depends heavily on the database performance of the system. If you notice that the generation jobs run for a long time, proceed as follows:

 

  1. Make sure there are enough system resources (CPU, memory) and free background work processes. SAP Note 1651645 describes a known issue.
  2. Make sure that database performance is fine. Please involve your database expert if necessary.

 

How to handle generation errors?

 

Analyze transaction SM37 (job name RSPARAGENER8 or RSPARAGENER8M) as well as SM21, ST22 and ST11.


How much free space is required in database for SGEN execution?


If you want to regenerate loads, make sure that there is enough space available in the database. The space required can be several hundred MB. Generation over all components requires around 2 GB of free space.

 

See Also:

 

162991 - Generation tools for ABAP programs
379918 - Redesign of the SGEN transaction
413912 - Shorter runtime for specifying generating quantity
438038 - Automatic regeneration of invalidated loads
589124 - Performance improvements when Support Package imported
1132507 - SGEN: Using maximum number of free work processes
1147789 - SGEN does not generate all loads after release upgrade
1230076 - Generation of ABAP loads: Tips for the analysis
1630356 - SGEN terminates after 500 generation errors
1645864 - SGEN generation errors do not cause an error message
1651645 - Maximum number of SGEN processes cannot be greater than 9
1869363 - SGEN: Correction for selection of WebDynpros / BSPs

 

Note:

For more information please refer to SAP Note 1989778 - FAQ: SGEN

 

Please leave any feedback in the comments section below. You can also post any questions in the SL discussion forums.


James Wong

SAP Topic Communicator


Using SWPM and Oracle’s PL/SQL-Splitter for migrations towards SAP HANA


To improve the performance during export and also the duration of the table splitting itself, Oracle offers a splitting mechanism which is quite fast. The so-called PL/SQL-Splitter uses the ROWIDs of a table to calculate the WHERE-clauses used by the exporting R3load.

As a ROWID represents the address of a record in a table there is no guarantee that it will be the same in the source and the target database.

Due to this, the ROWID can only be used to export the data using multiple R3loads in parallel. However, R3load also uses the WHERE clause on the import side in case it is restarted after an error. In this case R3load adds the WHERE clause to the delete statement in order to delete only its own subset of data for this table.

That means if a table is imported into SAP HANA in parallel and the WHERE-clauses are based on ROWIDs, the whole table needs to be truncated and loaded again in case one R3load fails. But due to the performance improvement on the export side it might be worth the effort.
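To make this more concrete, here is a purely illustrative sketch of the kind of condition each exporting R3load receives so that it only reads one slice of the table. The ROWID values are made up and CHARTOROWID is simply the Oracle conversion function; the exact format of the generated WHERE files is described in SAP Note 1043380:

WHERE ROWID BETWEEN CHARTOROWID('AAAUWOAAEAAAACHAAA') AND CHARTOROWID('AAAUWOAAEAAAAD7EZZ')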

 

This blog post describes how to use this splitting mechanism together with the software provisioning manager. The version should be at least SWPM 1.0 SP7 PL4. The scenario needs to be well tested before it is used for a productive migration. The error case should also be tested, so that you get used to the procedure.

 

Creating the structure of the export DVD

First things first. An empty export medium or export DVD needs to be created. As the source database is Oracle you have to choose Oracle in the Product Catalog.

CreateDVD.PNG

The target database type of course will be SAP HANA. The rest of the routine is pretty straight forward.

 

This example assumes that not all tables which are about to be split will be split with the PL/SQL-Splitter. That means we have to call the table splitting procedure of the software provisioning manager twice. In this case it is important that the PL/SQL-Splitter is called first.

 

Splitting by ROWID

More information about the PL/SQL-Splitter can be found in OSS note 1043380 (Efficient Table Splitting for Oracle Databases).

RowIdSplitter_1.PNG

To get to the PL/SQL-Splitter you have to choose Oracle as target database type.

 

RowIdSplitter_2.PNG

 

After the procedure has finished successfully, you will find the result on the export DVD. If there are other tables which shall be split with the standard tools, you have to call the software provisioning manager again. The new WHERE files will be added to the existing ones.

 

Splitting with standard tools

R3ta_1.PNG

This time you have to add SAP HANA Database as target database type. Otherwise the existing WHERE files will be deleted.

 

R3ta_2.PNG

After all splitting is done, you should have WHERE files for all your tables, and the file whr.txt needs to contain the names of all split tables.

 

Export of the source database

The export of the source database is the same as for all migrations towards SAP HANA. You can use all possible options.

Database_export.PNG

Installation of the target system

On the target side we have to do a few manual steps.

Target_system_1.PNG

The new option 'Omit tables from import' offers the possibility to exclude tables and packages from the import. We use this feature to exclude the tables which were split by the PL/SQL-Splitter. We will import them manually.

Target_system_2.PNG

In this example I have marked the checkbox "Create indexes afterwards". This is not necessary. It was just the default.

Target_system_3.PNG

Once the import is running, we can simply grep the command line from the running processes (a small sketch of how to do this follows the list below). We can see that the migration monitor is called with a few additional options:

  • -omit V
    That means the views will not be created in this run. They will be created when all tables are imported. Otherwise we might run into errors when the excluded tables are not ready.
  • -packageFilter
    This file contains the tables or packages which are excluded.
  • -filterUsage omit
    That means the tables in the file are omitted.
  • -onlyProcessOrderBy true
    Only the tables and packages in the file order_by.txt are processed by the migration monitor not all packages which are on the export DVD. Due to this the excluded tables won't be processed here.
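A minimal way to capture that command line, as mentioned above (the grep pattern is just an example; on your platform the migration monitor may show up under a different name, for instance as the java call of migmon.jar):

ps -ef | grep -i migmon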

 

Now it is time for some manual action.

 

Importing the excluded tables

As the import is done by the <sid>adm user, we have to switch to that user to execute the following commands.

 

  • > su - trgadm
  • > mkdir import_RowID_Tables
    > cd import_RowID_Tables
    This will be the installation directory for the manual import.
  • The following files and directories need to be copied into the new directory.
    > cp -r ../importSWPM/sapjvm .
    > cp ../importSWPM/import_monitor_cmd.properties .
    > cp ../importSWPM/migmonctrl_cmd.properties  .
    > cp ../importSWPM/DDLHDB.TPL .
    > cp ../importSWPM/ngdbc.jar .

 

After copying the files we have to adapt the properties file of the migration monitor to the new directory. I use '.' instead of the absolute path.

import_monitor_cmd_properties.PNG
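As a hedged example of that adaptation, assuming the copied properties file still contains the absolute path /sapdb/importSWPM (this path is purely hypothetical), the replacement could be done like this:

sed -i "s|/sapdb/importSWPM|.|g" import_monitor_cmd.properties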

  • Now we have to extract the migration tools into the directory to be able to import the tables by hand.
    > SAPCAR -xvf <IM_LINUX_X86_64>/COMMON/INSTALL/MIGMON.SAR dyn_control_import_monitor.sh migmon.jar migmonctrl.jar
  • Now we can already start to import the excluded tables. As we have chosen to create the indexes later we have to omit them together with the Views during this run (-omit PIV). We will also use the same package filter but the usage is now exclusive.

    > setenv JAVA_HOME ./sapjvm/sapjvm_6
    > setenv HDB_MASSIMPORT YES
    > ./dyn_control_import_monitor.sh -sapinst -host ls9291 -instanceNumber 00 -username SAPTRG -password <Password> -databaseName OQ1
    -omit PIV -packageFilter /sapdb/exportQO1/RowID_Tables.txt -filterUsage exclusive -onlyProcessOrderBy true

Running_migmon.PNG

Now, what do we have to do if one of these jobs runs into an error? Keep in mind that the WHERE clauses cannot be used. That means the whole table has to be truncated. But the R3load tasks and the import state of the migration monitor also have to be adjusted properly.

Error_1.PNG

In this example two jobs for table REPOSRC have failed. That means we have to load the whole table again. This also means that all R3load jobs for this table need to be in status finished or failed before we can truncate the table and reset the status of all import jobs.

Error_2.PNG

In this example the different jobs for table REPOSRC have all kinds of status:

  • REPOSRC-1, REPOSRC-10 and REPOSRC-post
    Status 0 means they are not yet processed.
  • REPOSRC-2, REPOSRC-4, REPOSRC-6
    Status ? means they are in process right now.
  • REPOSRC-3, REPOSRC-5, REPOSRC-8
    Status - means an error occurred.

 

The first thing now is to prevent all jobs with status 0 from being executed by the migration monitor:

 

sed -i "/REPOSRC/s/=0/=+/g" import_state.properties

 

Now all unprocessed jobs for table REPOSRC have status '+', which means they won't be processed by the migration monitor.

The next step is to kill all running R3load jobs for this table. Don't use kill -9; give each process the chance to end itself properly. After that, all jobs for this table should have the status - (failed) or + (ok).
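A small sketch of this step; the PID is of course hypothetical and has to be taken from the process list, and if the package name does not appear in the command line on your platform, identify the processes via their task files instead:

ps -ef | grep R3load | grep REPOSRC
kill <pid of the R3load job>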


The next step is to truncate the table and to delete all TSK files which belong to this table.

sidadm> hdbsql -U DEFAULT truncate table REPOSRC

sidadm> rm REPOSRC-*__TPIVMU.TSK*

 

Now we have to reschedule the jobs:

 

sed -i "/REPOSRC-post/s/=+/=0/g" import_state.properties

sed -i "/REPOSRC-[^p]/s/=[-,+]/=0/g" import_state.properties

Error_3.PNG

 

In case you don't want to do this while the migration monitor is running because it is still working on other tables, you might want to create one installation directory for each table you want to import this way. By doing this, each table will have its own migration monitor, which will abort once all jobs for this table have failed.


After the jobs are rescheduled they hopefully will finish successfully. Then we can switch back to the software provisioning manager.

 

Swpm_rest_1.PNG

From the pop-up we see that all the other jobs have already finished. Once we have finished importing our excluded tables, we can press OK. Now the indexes for tables REPOSRC and T100, and finally the views, will be created.

Swpm_rest_2.PNG

After the import, the package checker is executed. It compares the packages on the export DVD with the logfiles in the directory of the software provisioning manager to make sure everything is imported. As we have imported some tables in different directories, we will get a message saying that some packages might not have been loaded.

PackageChecker.PNG

But we know that everything is fine, so we press the OK button. Now the installation should cruise to a successful end.

[CISI] SUM phase APPLY_SYSINFO_CHK stops with a severe error


Hi guys,

 

The current SUM SP12 patch level 8 contains a bug which forces the CISI process to stop.

 

In phase SYIDUP_CHECK/APPLY_SYSINFO_CHK you will see the following error in the logfile APPLSYSINFOCHK.LOG:

 

1 ETQ201 Entering upgrade-phase "SYIDUP_CHECK/APPLY_SYSINFO_CHK" ("20150227124653")
2 ETQ367 Connect variables are set for standard instance access
4 ETQ399 System-nr = '00', GwService = 'sapgw00' Client = '000'
1 ETQ200 Executing actual phase 'SYIDUP_CHECK/APPLY_SYSINFO_CHK'.
1 ETQ399 Phase arguments:
2 ETQ399 Arg[0] = 'APPLSYSINFOCHK.$(SAPSID)'
2 ETQ399 Arg[1] = 'CHECK'

[...]

2 ETQ232 RFC Login succeeded
4 ETQ010 Date & Time: 20150227124654
1 ETQ233 Calling function module "OCS_API_APPLY_SYSINFO_XML" by RFC
2 ETQ373 parameter "IV_STACK_FILE" = "/Install/stack_for_cisi.xml"
2 ETQ373 parameter "IV_LOGFILE" = "APPLSYSINFOCHK.R82"
2 ETQ373 parameter "IV_LOG_DIRTYPE" = "T"
2 ETQ373 parameter "IV_CHECK_ONLY" = "X"
1 ETQ234 Call of function module "OCS_API_APPLY_SYSINFO_XML" by RFC succeeded
4 ETQ010 Date & Time: 20150227124654
2 ETQ399 Table ET_CVERS (#127):
3 ETQ399 "BP-CANW","740V4","0000",""
3 ETQ399 "BP-ERP","617V2","0000",""
3 ETQ399 "BP-SOLBLD","70V9","0000",""
3 ETQ399 "EA-APPL","617","0000",""
3 ETQ399 "EA-DFPS","600","0000",""
3 ETQ399 "EA-FIN","617","0000",""
3 ETQ399 "EA-FINSERV","600","0000",""
3 ETQ399 "EA-GLTRADE","600","0000",""

[...]

3 ETQ399 "WEBCUIF","747","0000",""
2 ETQ374 parameter "EV_RC" = "0"
2 ETQ373 parameter "EV_MESSAGE" = ""
1EETQ399 Last error code set is:
1EETQ204 Upgrade phase "APPLY_SYSINFO_CHK" aborted with severe errors ("20150227124654")

 


The bug will be fixed in the next SUM version. So please reset the Software Update Manager as explained at Resetting an update - Technology Troubleshooting Guide - SCN Wiki and wait for the next release, which will be published soon. Unfortunately there is no workaround to overcome that phase.
An official SAP Note (#2136279) has been published a few minutes ago.

 

 

+++ Update from March 3+++

 

Patch Level 10 will contain the fix. The patch should be released by today.

 

+++ Update from March 3 +++

 

 

 

Best regards,
Andreas

Supportability Tool for Transport Problems

  • SAP Change and Transport System (BC-CTS) is a unique SAP tool that enables customers to organize development projects in the ABAP Workbench and in Customizing, and then transport the changes between the SAP systems in their system landscape.

 

  • SAP’s CTS tool is already very mature, but we still receive a high number of customer incidents due to the complexity of customers’ SAP system landscapes as well as their frequent development and change management activities.

 

  • We therefore developed a supportability tool to enable customers to better monitor their transport activities and progress, analyse issues and solve them by themselves. This tool is included in KBA 2126899, together with a video and a user guide in a PDF document.

 

  • The tool allows a user to analyse all transport related activity in one transaction.

 

  • It allows a user to input a time frame, and the tool will return all transports that were exported and imported to the selected system (provided it shares a transport directory with the system you are running the tool from) during that period.

 

  • It will highlight the return code for each phase of the transport and allow the user to view the detailed log file from the transport directory. This will allow the user to troubleshoot any issues presented.

 

  • The tool has the ability to take the error message(s) from the log file(s) and search for SAP Notes / KBAs to find a solution.

 

  • The user has the ability to view important DB tables related to the Transport Management System.

 

  • The tool can read the transport buffer file on OS level and give an overview of the status including buffer snapshots to facilitate troubleshooting.

 

  • The user can view the password rules that would affect TMS user (‘TMSADM’) directly from the tool.

  • Direct access to work directory and system log with the ability to highlight errors and search for a solution.

  • Direct access to the TMS QA tables to analyse Quality Assurance issues.

 

  • Finally, there is an action which highlights the performance times of the transports (by displaying the runtime of each transport phase), highlighting the areas of poor performance so they can be addressed.

SUM Upgrades Configuration : Tuning/Process Counts


Configuration Screen of SUM : Parameters for procedure.

Process_.PNG

 

OK, so the problem was which process counts to use for our updates and upgrades. I turned to many colleagues; all of them had calculations of a random and arbitrary nature. I was not convinced. We had to have something concrete for each and every process type which could be used for direct calculations. I tried multiple runs on a sandbox, referred to many notes, initiated an OSS incident, and also turned to in-house database and SAP experts.

 

The final results provided the reason for the random, arbitrary nature of the view taken by my colleagues. You can't have something conclusive like (number of CPUs x 1.3 = R3trans processes to use), although a lot of industry veterans do so. What one can do is fall into the 'thought process' of researching, tuning, observing, and testing.

 

One of the things that I found myself in great need of, but missing, was a good SCN blog on the topic. There were tidbits here and there, but hardly any good guidance.

 

The reason I initiate this blog and discussion is just that: to get thoughts from any and all, so that the end page is an ever-evolving starting point for everyone at the above screen of SUM for their respective SP/EHP/release upgrade.

 

Let's discuss, process by process, the thought process I used:

 

1. ABAP Processes :

 

Pretty straightforward. Configure according to the BGD processes available in the main system. Make sure enough are left for the normal jobs users have to run. For downtime, you can use the maximum available. As per the SUM Guide, the returns stagnate after a value of 8. So below is what I used for a system with 10 BGD processes available:

 

UPTIME : 6
DOWNTIME : 10

 

I could have increased the BGD processes in the system, but since values above 8 should not have had much impact, the counts above seemed optimal to me.
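If you want to double-check how many background work processes an instance is configured with, you can look at the profile parameter rdisp/wp_no_btc; the command below is just a generic sketch, the profile path depends on your system:

grep rdisp/wp_no_btc /usr/sap/<SID>/SYS/profile/*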


2. SQL Processes :

 

This part looks simple, but was the trickiest for me. Appropriately sizing this can do good for the DBCLONE and PARCONV_UPG phases. But size it too large and you may experience frequent deadlocks in various phases, logging-space-full errors, a stuck archiver, or severely impacted performance.

 

The problem in my case, when using nZDM with a very high SQL count, was "Transaction Log is full" - the DB2 database running out of logging space. If you are working with a database like DB2 - where the "active logging space" is constrained by DB parameters - make sure to size this process count small. Too many parallel SQL statements and the logging space will fill up quickly, resulting in the aforementioned error, which can only be bypassed by decreasing the count. To unthrottle, increase the logging space or the primary/secondary logs. Also, the log archiving has to be fast, with plenty of buffer space in the archive directory.
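As an illustration (run as the DB2 instance owner; the database name is a placeholder), the relevant logging parameters can be checked like this:

db2 get db cfg for <SID> | grep -iE "LOGFILSIZ|LOGPRIMARY|LOGSECOND"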

 

As for the count, if one can take care of the logging space and log archives, the next step is the CPU. Different databases may differ slightly when executing SQL in parallel, but the core concept remains the same: more CPUs help. Once you have a number, like 8 cores in my example, you next need to finalize the degree of parallelism (DOP - an Oracle term) - the number of parallel threads each CPU will be executing. For example, if 16 SQL processes had been used in my case, 2 threads would have been executing per CPU - a choice I didn't take, as I wanted minimal impact on the productive operation of the system during the uptime phases.

 

Referring to the standard documentation of the Oracle and DB2 databases, what I noticed was that the default and recommended DOP is 1-2 times the number of online CPUs. Also, the returns stagnate after an increase to a particular number, after which the negative effects (performance deterioration) increase as usual while the returns remain minimal.

 

After increasing the logging space and providing enough space in the archive directory, the following is the number I used for 8 CPUs.

 

UPTIME : 8 (DOP=1)
DOWNTIME : 12 (DOP = 1.5) Will make this 16 in the next system.

 

DBCLONE was done in a couple of hours with the above - good for me.

 

4. R3trans Processes :

 

So, the big one now. This process count has the biggest impact. TABIM_UPG, SHADOW_IMPORTS, DDIC_UPG - the phases with the biggest contribution to runtime/downtime - go faster or slower based on how well this is tuned. The KBAs below are the first step to understanding how tp forks these processes during imports. There is a parameter "Mainimp_Proc" which is used in the backend to control the number of packages imported in parallel, and the first KBA below explains exactly that - the entire concept.

 

1616401 - Understanding parallelism during the Upgrades, EhPs and Support Packages implementations
1945399 - performance analysis for SHADOW_IMPORT_INC and TABIM_UPG phase

 

Now, how to tune it. This was one of the most confusing ones. There are notes which say to keep it equal to the number of CPUs (refer to the notes above - they say this). The SUM Guide seems to love the value of 8 ("a value larger than 8 does not usually decrease the runtime" <sic>). You also have to keep the memory in mind: 512 MB of RAM per R3trans process seems a good guideline. The end result for me was the same process count as for the SQL processes:

 

UPTIME : 8
DOWNTIME : 12
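Applying the 512 MB guideline from above to the downtime value gives a quick memory estimate: 12 R3trans processes x 512 MB ≈ 6 GB of RAM that should be available for the import processes alone.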

 

One other thing still left unexplored, but next on my radar, is playing with "Mainimp_Proc". The link below talks about changing that using the parameter file TABIMUPG.TPP. Since this controls the number of tp processes, tuning it should be done after getting results from one system; the readings in the logs can help here.


http://wiki.scn.sap.com/wiki/display/ERP6/Performance+during+upgrade+phase+TABIMUPG


5. R3Load processes :

 

For an EHP update/SPS update, I don't think this plays any part. From what I understood, this is relevant mainly to a release upgrade. Anyway, this one was a bummer: I couldn't find any helpful documentation on R3load relevant specifically to upgrades. However, after communicating with SAP over an OSS incident, the guideline below was received and used:

 

"There is no direct way to determine the optimal number ofprocesses. A rule of thumb though is to use 3 times the number of available CPUs." The Count I used:

 

UPTIME : 12
DOWNTIME : 24

 

But anyone from the community can answer and increase my understanding: which phases use this in upgrades, if any?

 

6. Parallel Phases :

 

Another one of a random nature with scarce details. This one is about the number of SUM sub-phases which SAPup can be allowed to execute in parallel. Again, I had to refer to SAP via an OSS incident for this one.

 

"The phases that can run in parallel will be dependent on upgrade/update that you will be performing and there is no set way tocalculate what the optimum number would be." Recommendation was to use default and that is what I did.

 

UPTIME : Leave default (Default for "Standard" mode - 3, Default for "Advanced" mode - 6)
DOWNTIME : Leave default (Default for "Standard" mode - 3, Default for "Advanced" mode - 6)

Transport of HANA objects with HALM


I hope you have already heard about the HALM XS application. HALM stands for HANA Application Lifecycle Management, and the application comes with any HANA installation starting with SP6 (the automatic content delivery unit HANA_XS_LM). In short, the tool helps you to develop and transport HANA applications. You can manage products, delivery units and packages, but in this blog I would like to describe only the transport capabilities and details provided by HALM. The main transport features were already available in the first version of HALM released with HANA SP6. HALM SP7 was already able to transport released objects as well, and with HALM SP8 you can transport your changes not only between two systems, but also integrate the Change and Transport System (CTS) to transport to multiple systems. The main focus of this blog is the transport of HANA repository objects with native HALM. HALM can be accessed at http(s)://<host:port>/sap/hana/xs/lm


Transport of active objects


Let’s say, you develop your simple XS application. You create a package and in the package you create your repository objects and activate them. For example:

Pic1.jpg

After you checked that your application works, you would like to transport it to another system. To do so, you need to be sure that all the packages where your objects are located are assigned to a Delivery Unit (DU). You create a Delivery Unit and assign the packages you would like to transport to it. Something like: DEMO_DU (demo/aaa).

Transporting this application via HALM is pretty easy. I would just like to briefly remind you that HALM transports are supposed to be PULL transports: in the target HANA system you create transport routes from one or many source HANA systems. For each transport route you can assign one or many delivery units existing in the source system. The DUs do not have to already exist in the target. The list of delivery units available for transport is defined by the source system.

Pic2.jpg

So, the first thing to be done to organize HALM transports is to register, in the target system, the source system from which the delivery units should be transported. The details of how to do that can be found in the official documentation (Developer Guide, Chapter 12.6). Having a transport route defined, it can be (re-)used for subsequent transports of the assigned delivery unit(s). A transport executes 3 main steps: exporting the assigned delivery units in the source system, transferring the DU archives to the target system, and finally importing the archived objects and activating them. The activation process can fail (for whatever reason), but in many cases the objects are nevertheless already in the target system. For such cases the transport result code gets a special value (8 - activation errors), and once the problems are fixed (either via another transport or manually), the failed objects can be reactivated.

For the transport of active objects, HALM currently offers only one type of transport: “Complete Delivery Unit”. In detail this means that for each package assigned to the DU, all the active objects are exported into the DU archive (tgz), and importing this archive in the target system overrides the DU objects there. So, for example, a transport of DEMO_DU from a source system into a target system where DEMO_DU already exists (see the picture below) will delete the demo.aaa.AA.xsjslib object and the demo.ccc package with all its content.

Pic3.jpg

One important thing I would still like to mention about HALM transport behavior is the transport of dependent delivery units. Let’s say you defined some privileges in your DEMO_DU (a .xsprivileges file in demo.aaa), but another application object (demo.bbb.Tester.hdbrole) referring to your privileges belongs to another DU (INTERNAL_DEMO_DU):

DEMO_DU (demo.aaa), INTERNAL_DEMO_DU (demo.bbb)

Pic4.jpg

If the INTERNAL_DEMO_DU is transported first, the activation would fail, as the required .xsprivileges object is missing in the target system. Assigning the two DUs to the same transport route and executing such a transport will import both delivery units in a single call, so that the objects from both DUs can be successfully activated (even if there are cyclic object dependencies).


Transport of released objects

 

Transporting all active objects belonging to a delivery unit is not always a good idea. Very often developers work hard on their objects, activating them multiple times until they are really ready for migration. Quite common is a situation where some DU objects are already ready for transport, but some of them are not. This is one of the major scenarios where you can benefit from enabling the Change Recording feature in the (source) system. The HALM administrator can enable Change Recording in HALM to record all the changed objects. You can find more details in the official documentation (Chapter 12.7). Once Change Recording is enabled, all object modifications have to be assigned to a change. The objects are locked in an open change until all change contributors approve their modifications and the change is finally released. The new “released” state of an object brings additional flexibility to your transports: only released objects are exported and finally transported to a target system. The object versions which are not yet good enough (not yet released) are not transported. The “All Changelists” type of transport route from a system with enabled Change Recording will transport all the released DU objects.

Pic5.jpg

Transport of released changes

 

With the transport of released changes you can reach even more flexibility in your transport logic. If Change Recording is enabled in your source system, you can choose in HALM which released changes you would like to transport. You need to create a transport route of the “Selected Changelists” type, assigning one or more delivery units. When you initiate a transport for such a route, HALM finds all the changes that have been released for the specified delivery units but not yet transported.

Pic6.jpg

In the list you can see the released changes with information about which delivery units are affected by a change and when a change was released. You can also see object details for each change.

When you select one or more changes, HALM calculates all their predecessors. Predecessors in the context of HALM are changes released earlier than the selected changes and containing objects from the same packages. For example, having 3 released changes like below:

Pic7.jpg

and choosing Change 3 for transport finds Change 1 as a predecessor, because it contains objects from the same package (demo.bbb). In HALM you can only transport the selected changes together with their predecessors. As soon as the changes are successfully transported (without activation errors), HALM stores information about which change was transported last for each package. Once transported, changes are no longer presented in the list of changes available for transport.

Here I would also like to mention one detail which is very important. When Change Recording is activated in a system, all the active objects in the system become part of a so-called “base” change, which is released automatically. The base change is initially always a predecessor of any released change (per DU) visible in the HALM wizard, even if it is not shown there. In other words, if your DU already contains 100 objects at the time when you enable Change Recording, all 100 objects are “released” with the “base” change. If you then modify an object, assign it to a change, release the change and finally transport it, all the 100 objects released with the “base” change will be transported together with it (as a predecessor). In the ideal case you start developing your DU objects after Change Recording is already activated.

 

References

[1] SAP HANA Application Lifecycle Management in the HANA Developer Guide: http://help.sap.com/hana/SAP_HANA_Developer_Guide_en.pdf (Chapter 12);

[2] Change and Transport System: http://scn.sap.com/docs/DOC-7643

Demystifying nZDM (Near Zero Downtime) Part 1 : SAP Updates/Upgrades Fundamental Understanding



Our SAP upgrade tools: SPAM/SAINT, EHPI, SAPup, and then came SUM. The shadow system technique. Downtime minimization methods. And now, near-Zero Downtime. The upgrades keep evolving. With every passing day, simple innovations in the SAP upgrade approaches and methods move forward with just one simple goal - let's bring down the user lock time. Needless to say, maintenance cycles should minimally impact business operational times. Period. But where does this all start? How do the updates work, and what do all these fancy new terms bring to the table? With this multi-part blog I intend to understand and define exactly that, finally reaching a consensus on the nZDM approach of SUM and how it works. Let's start by clearing up the basics.

 

What is a Support Package?

 

The most simplistic view of Support Packages is that they are just transports: a whole bunch of notes (repository objects) with the relevant structures (dictionary objects) and modifications. All workbench requests that we are importing, with interdependencies, in a prepared queue. Nothing more.

 

Now, normal TRs never require downtime, but since all of *these* transports are SAP standard objects (tables, structures, programs, classes), the entire base of components of your system is being modified, so it is "downtime" for users - although the system stays mostly up, unlike upgrading the kernel, where it is all down.

 

 

ABAP Dictionary and ABAP Repository?


Now, firstly it is important to understand the difference and inter-relation between the SAP repository and the dictionary, as various phases of upgrades (including the SPDD and SPAU manual activities) depend on exactly this.

 

The data repository in SAP is the central store of all development objects: packages, classes, programs, function modules, screens, menus and also the Data Dictionary (DDIC) objects (tables, structures, views, data elements). In effect, the ABAP Dictionary is a subset of the ABAP repository by definition. But the Data Dictionary is also the base of all ABAP: it is the metadata describing all other data in the repository. All the programs, classes, menus etc. have no meaning without the tables, structures and views defining them, so this subset of the repository also forms the foundation of all other remaining repository objects.

For a more detailed definition, a good link and a quick read: http://www.stechies.com/difference-bw-data-dictionary-data-repository/

 

Lets go Legacy : SPAM

 

Now, let's take a very simple, straightforward SPAM approach without all the advancements of SUM. We go into the system (DDIC/000), define a queue, lock all dialog users, and import it. All SPs get imported in sequence. Interdependencies are taken care of automatically through the programmed excellence of the SAP update tool. All is well in the world.

 

But what happens behind the scenes? Below is a very basic walkthrough of the steps:

 

  • Firstly, all packages are disassembled. The data and cofiles of the transport requests are unpacked from the parcel files. The system is checked for conflicts with objects in open repairs or locked in developments that are still not released. Test imports take place to validate this.

 

  • Now, before the actual import, the object list of the SPs is generated. Objects are divided into dictionary objects and repository ones. Since the dictionary objects form the base of the repository, they need to be imported first.

 

  • The first major step: DDIC objects are imported - new data definitions and structures. It is in this step that, during the import, SPDD (dictionary modifications) may come up. Basically, if the dictionary objects have been edited or customized in the system, the upgrade stops until a decision is taken on whether to lose or retain the changes using transaction SPDD.

 

  • Table conversions. In the target SAP version or SP level we are going to, a particular application table may have a new index or a completely new structure. All these conversions to the target structure take place here. Point of note: if it is a big upgrade and the changes are big, like table field conversions, each existing row of the table has to be converted. Now, what if there is huge data in these application tables? The runtime of this phase increases, increasing our overall downtime and the associated business woes. This fact makes these conversions a target of technological advancements (Incremental Conversions / Change Record and Replay framework etc. - the fancy terms in SUM downtime minimization methods) in the SAP update tool options to bring down the business downtime.

 

  • DDIC activation. After the import of all dictionary objects, they need to be activated. Their associated runtime objects are generated in this phase. The relevant nametab entries are made for table and field definitions with the activation, and the system is ready to import the remaining repository objects in the updates.

 

  • Now the proper import of all remaining repository objects, viz. programs, menus, screens, etc., takes place - another main phase of the upgrade. It is here that SPAU (repository modifications) comes up. In contrast to SPDD, this activity can also be done after the upgrade, for up to a fortnight, without access keys for the involved objects. The difference between SPDD and SPAU is the same as between dictionary and repository.

 

  • XPRAs and After-Import Methods. From what I understand, whether it is a single transport import or any form of upgrade or update, there is always an associated ABAP program or FM that is executed afterwards for adjustments, data mergers or alignment of SAP-shipped customizing. The XPRAs and AIMs are these reports (good food on XPRAs: http://wiki.scn.sap.com/wiki/display/TechTSG/XPRAS+phases#)

 

  • Standard cleanup of old objects, obsolete versions, and final queue check.


All of the above phases (except maybe the first and the last one) are downtime-relevant. So, once started, the system is locked for Bob, no matter how urgent that one cost statement is that he so wanted from the system but was unable to get, as he was on leave while the downtime notifications were sent to end users.

 

How do we help Bob in the future? Use SUM, of course. In the next part of the blog, we discuss how all or most of these phases run peacefully using SUM, while Bob still plays profit-loss monopoly in a fully productive system.

 

All of the detailed phase names, along with more detail, can be found at the following link: https://help.sap.com/saphelp_crm50/helpdata/en/3d/ad5d384ebc11d182bf0000e829fbfe/content.htm and also by executing report "RSSPAM10" in any ABAP system.

SAP HANA Client Software, Different Ways to Set the Connectivity Data


An SAP ABAP system needs connectivity data to log on to its database. In the case of SAP HANA, there are three ways to set up the connectivity data:

  1. Local hdbuserstore container,
  2. Global hdbuserstore container and
  3. ABAP Securestore (rsdb_ssfs_connect)


This post describes the different possibilities.

 

Local hdbuserstore container


The local hdbuserstore container is available since the beginning of SAP HANA. It is used in all versions of software provisioning manager and also in the Database Migration Option (DMO) of Software Update Manager (SUM).

It is the default when you are doing an installation or migration towards SAP HANA. It means one hdbuserstore is created for each host you are doing an installation of an ABAP instance.

It is placed in the home directory of the user or in the home area of the Microsoft Windows registry.

hdbuserstore_linux.PNG

The hdbuserstore is placed in the home directory of the user, in the subfolder .hdb/`hostname`. That means that even if user cooadm has a shared home directory, every host will have its own hdbuserstore.

hdbuserstore_win.PNG

On Microsoft Windows, the hdbuserstore is stored in the Windows Registry.

 

The hdbuserstore is used by the SAP kernel tools without further options and by the SAP HANA client tools like hdbsql using the option -U <ENTRY>

hdbsql_DEFAULT.png
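For reference, a short sketch of how such an entry is maintained with the hdbuserstore tool; the key name DEFAULT is the one used in this example, while host, port, user and password below are only placeholders:

hdbuserstore SET DEFAULT hanahost:30015 SAPABAP1 <password>
hdbuserstore LIST
hdbsql -U DEFAULT "SELECT * FROM DUMMY"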

The connect method of R3trans can be traced by checking the logfile trans.log.

r3trans_hdbuserstore.png

The disadvantage of this method is that there is one hdbuserstore-container on each SAP application server. That means if you want to change the connectivity data, you have to logon to every server of the system.

 

Global hdbuserstore container

 

Since SAP HANA Client Software Revision 93, it is possible to put the hdbuserstore container into a central place.

The name for this is HDB_USE_IDENT and it is only available on Unix/Linux. It is an environment variable whose value replaces the hostname as the folder name. The hdbuserstore is still in the subfolder .hdb of the user's home directory. HDB_USE_IDENT is the successor of the method which used a file called installation.ini to set a folder name by using a virtual hostname.

hdbuserstore_hdb_use_ident.png

By using this method, a global identifier can be used to have only one hdbuserstore in a shared home directory of user <sid>adm.

 

In case you want to use this feature right from the installation of the system, you have to use at least software provisioning manager 1.0 SP7 PL7.

When you start the installation, you have to add the following parameter to the command line:

 

/sapdb/DVDs/IM_LINUX_X86_64/sapinst HDB_USE_IDENT=SYSTEM_COO

 

The value of the parameter HDB_USE_IDENT can contain letters, numbers, '-' or '_'.

By using this option a special profile parameter will be set in profile DEFAULT.PFL:

 

DEFAULT.PFL
...

dbs/hdb/hdb_use_ident=SYSTEM_COO

...

 

The profile is parsed by the database-specific login scripts .dbenv*sh:

 

.dbenv*.csh
...
# set HDB_USE_IDENT for alternative userstore folder

if(-f /usr/sap/"$SAPSYSTEMNAME"/SYS/profile/DEFAULT.PFL) then

  set hdb_use_ident = `awk -F= '/^dbs\/hdb\/hdb_use_ident/ {print $2; exit 0}' /usr/sap/"$SAPSYSTEMNAME"/SYS/profile/DEFAULT.PFL`

  if ( "$hdb_use_ident" != "" ) then

    setenv HDB_USE_IDENT $hdb_use_ident

  endif

endif

 

.dbenv*.sh
...

# set HDB_USE_IDENT for alternative userstore folder

if [ -f /usr/sap/"$SAPSYSTEMNAME"/SYS/profile/DEFAULT.PFL ]; then

  hdb_use_ident=`awk -F= '/^dbs\/hdb\/hdb_use_ident/ {print $2; exit 0}' /usr/sap/"$SAPSYSTEMNAME"/SYS/profile/DEFAULT.PFL`

  if [ "$hdb_use_ident" != "" ]; then

    export HDB_USE_IDENT=$hdb_use_ident

  fi

fi


Login scripts with these entries are placed on the software provisioning manager's DVD in folder COMMON/INSTALL/HDB. They are put into the home directory of user <sid>adm during the installation.
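A quick way to verify that the variable is picked up is to log on as <sid>adm and check the environment and the shared folder; the identifier SYSTEM_COO is the example value used above:

echo $HDB_USE_IDENT
ls ~/.hdb/$HDB_USE_IDENT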

 


ABAP SSFS Securestore

The ABAP Securestore in general is a database-independent functionality to save data. It is placed within the SAP system. For more information, check SAP Note 1639578. With SAP Kernel version 7.42 PL 101, this functionality is also available for SAP HANA.


abap_ssfs.pmg.PNG

This feature is also supported by the software provisioning manager 1.0 SP7 PL7. To enable the feature, you have to call sapinst with the following parameter:

 

/sapdb/DVDs/IM_LINUX_X86_64/sapinst HDB_ABAP_SSFS=YES

 

Right now, there is a 7.42 Kernel DVD available, but it doesn't contain the necessary patch level for the SAP Kernel. Due to this, you will run into an error during the step testDatabaseConnection:

 

r3load_testconnect.png

At this point, you can log on as user <sid>adm, change to the exe directory and extract the newer SAP kernel.

extract_kernel.png

After the extraction of the new kernel, you can simply press Retry and the installation will continue. In case you are doing a migration or a system copy, you can even add the Kernel archives during the dialog phase to have them extracted automatically.

For more information about how to migrate an existing system to ABAP SSFS, check SAP Note 2154997.
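As a rough check that the kernel really uses the ABAP SSFS, you can look for the profile parameter mentioned in the list at the beginning of this post; the expected value of 1 is an assumption based on SAP Notes 1639578 and 2154997, so treat this only as a sketch:

grep -r rsdb/ssfs_connect /usr/sap/<SID>/SYS/profile/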

 

Keep in mind that only tools of the SAP kernel are able to read from the ABAP SSFS securestore. That means that SAP HANA client tools like hdbsql cannot use the ABAP SSFS. So in case you want to use them, you might want to choose one application server where you still maintain one hdbuserstore container.

 

The following picture provides an overview of how the connectivity data of the different containers is accessed.

sap_appl_server.png

In case the ABAP SSFS is used, the database-specific library dbhdbslib reads the connectivity data from the ABAP secure store and sends the data to the SAP HANA client. The SAP HANA client will then not read the connectivity data from the hdbuserstore container.


Near-Zero Downtime Maintenance for High Availability SAP Java Systems

$
0
0

Overview

Being up to date with the latest technologies while aiming for production stability and consistency is a top priority for most businesses. However, this often requires business-critical downtime for production systems. SAP offers a solution to this problem by providing its clients with the near-Zero Downtime Maintenance (nZDM) for SAP Java procedure for its SAP Java systems.

This blog gives an overview of the nZDM Java solution and the specifics of performing it on High Availability (HA) SAP Java systems.

 

What is the nZDM Java procedure?


The near-Zero Downtime Maintenance for SAP Java is a procedure that enables you to perform maintenance activities with greatly reduced business downtime on SAP Portal, SAP Process Orchestration (PO) and SAP systems that include only some of the usage types that are part of SAP Process Orchestration. nZDM Java deliveries are part of the SAP SL Toolset program.

The nZDM Java procedure allows SAP customers to perform all maintenance activities on a copy or clone of the system (also called ‘target system’ in the context of nZDM) while the production system (called ‘source system’) is still in use.


More information about the nZDM solution can be found in the links listed under Further Information at the end of this post.

 

Approaches for performing nZDM Java


Depending on the client’s system landscape and complexity, there are two main approaches for performing nZDM Java. Both rely on the copy or clone of the production system and include some basic common steps but the finalization steps differ.

Below the different approaches are explained via diagrams. The following legend describes the meaning of the different colors:

legend2.jpg

 

  • System switch

This approach is suitable for virtualized systems as well as HA systems. The system’s clone will become the new production system. Before replacing the production system with the target system, testing the updated clone is possible without downtime. This happens in a special ‘nZDM test mode’.

The technical downtime is approximately the time needed to restart the system. It varies depending on the type and load of the system.

These are the generic steps of performing a ‘System Switch’ on a standard Process Orchestration system:

 

0.     Initial State

The system on which we perform the ‘System Switch’ is a SAP Process Orchestration (PO) system consisting of several components: one or more Java Application Servers (AS), SAP Web Dispatcher (WD), SAP Central Services (SCS), SAP Enqueue Replication Service (ERS) and a database.

001-ss0_1.jpg

1.     Preparation

Prepare the production system for nZDM Java: configure database settings, deactivate background jobs, set the landscape directory (SLD) to read-only mode, download and run the nZDM Java GUI on a separate host.


2.     Connect the nZDM Java GUI to the source (production) system

Connect the nZDM Java GUI to one of the application servers of the source system.

002-ss1.jpg

3.     Start recording

Initialize recording of the database changes on the production system.


4.     Clone the source (production) system

Clone the source (production) system to create a target system.

003-ss2.jpg

5.     Isolate the cloned system

Configure the isolation (network fencing) of the target system to avoid conflict with the source (production) system.


6.     Start the target system

When the isolated clone is started, it starts as a target system.

004-ss3.jpg

7.     Update the target system

Perform the desired maintenance activities on the target system.

005-ss4.jpg

8.     (optional) Test the updated system

The updated target system can be tested before replacing the production system. No production system downtime is required while testing on the target system. However, before performing tests it is important to make a backup of the target system. After the tests, the target system is restored from that backup.


9.     Connect nZDM Java GUI to the target system

006-ss5.jpg

10.   Start DB data replication from the source system to the target system

Replicate all available data changes from the source’s DB to the target system.

007-ss6.jpg

11.   Stop source system / enter downtime phase

To replicate the last data changes the source system is stopped, starting the downtime phase. Only the DB of the source system remains active so that the replication can be finished.

This phase is activated via the nZDM GUI.

008-ss7.jpg

12.   Stop and unfence target system

After finishing the replication, the updated target system is stopped and unfenced. Once unfenced, it can be started.

009-ss8.jpg

13.   Start updated system / end of downtime phase

After starting the updated system, it replaces the original and becomes the new production system.

010-ss9.jpg


  • Database switch

This approach is recommended when the system is not virtualized, and it is not suitable for High Availability systems. It uses a newly created database on the existing source system DB host. The data is cloned on the existing database server or on a shared storage. This approach is useful when the existing productive environment is large-scale and has complex configurations; it might therefore be the least costly way to reuse the already existing environment.


 

 

nZDM Java on High Availability System

 

Depending on the scenario and the suitable approach for it, the steps of the nZDM Java procedure may differ.

This section contains step-by-step descriptions of two successfully performed scenarios of nZDM Java on HA systems. The first scenario demonstrates a 'Cluster Clone', and in the second a split of the cluster is performed.


 

nZDM Java on High Availability System: Cluster Clone

 

This approach for nZDM Java on an HA system is more suitable for virtualized systems or other environments where making a clone or a copy of the system can be performed easily. With the HA 'Cluster Clone', the update is performed on a clone or copy of the production system. During the maintenance performed on the clone (target system), the production system (source system) is active. The downtime is required for the completion of the replication of data changes from source to target and takes about one system restart time.

These are the generic steps of performing a ‘Cluster Clone’ on a two-node HA system setup:

 

0.     Initial State

The system on which we perform the ‘Cluster Clone’ is a High Availability / Disaster Recovery SAP Java system with two nodes and multiple application servers. Each node may have SAP Web Dispatcher (WD), SAP Central Services (SCS), SAP Enqueue Replication Service (ERS) and a database. The nodes are bound together in a HA cluster via HA software.

There is only one active WD, SCS, ERS and DB instance in the cluster of nodes at any given moment. If an active instance fails, the inactive activates and replaces it.

011-hass0.jpg

1.     Preparation

Prepare the production system for nZDM Java: configure database settings, deactivate background jobs, set the landscape directory (SLD) to read-only mode, download and run the nZDM Java GUI on a separate host.

 

2.     Connect nZDM Java GUI to the source (production) system

Connect the nZDM Java GUI to one of the application servers of the source system.

012-hass1.jpg

3.     Start recording

Initialize recording of the database changes on the production system.


4.     Clone the source (production) system

Clone the whole cluster of nodes to create a target system.

013-hass2.jpg

5.     Isolate the cloned system

Configure the isolation (network fencing) of the target system to avoid conflict with the source (production) system.


6.     Start the target system

When the isolated clone is started, it becomes our target system.

014-hass3.jpg

7.     Update the target system

Perform the desired maintenance activities on the target system.

015-hass4.jpg

8.     (optional) Test the updated system

The updated target system can be tested before replacing the production system. No production system downtime is required while testing on the target system. However, before performing tests it is important to make a backup of the target system. After the tests, the target system is restored from that backup.


9.     Connect nZDM Java GUI to the target system

016-hass5.jpg

10.   Start DB data replication from the source system to the target system

Begin replication of data changes from the source’s DB to the target system.

017-hass6.jpg

11.   Stop source system / enter downtime phase

To replicate the last data changes the source system is stopped, starting the downtime phase. Only the DB of the source system remains active so that the replication can be finished.

This phase is activated via the nZDM GUI.

018-hass7.jpg

12.   Stop and unfence target system

After finishing the replication, the updated target system is stopped and unfenced. Once unfenced, it can be started.

019-hass8.jpg

13.   Start updated system / end of downtime phase

After starting the updated system, it replaces the original and becomes the new production system.

020-hass9.jpg

 

 

nZDM Java on HA System: Cluster Split


This is an alternative approach for nZDM Java on HA systems that have limited hardware resources at their disposal. It is suitable for non-virtualized systems. With the 'Cluster Split', instead of making a clone of the system including all of its nodes (like we do with the 'System Switch'), we split the cluster of nodes and use one of the already available cluster nodes as a target system. The other node serves as the source system and remains active until the finalization phase of the nZDM Java procedure. The downtime takes about one system restart time, but recreation of the HA setup may need additional planning, time and effort.

The following describes the generic steps of a ‘Cluster Split’ nZDM Java approach on a two-node HA system:

 

0.     Initial State

The system on which we perform the ‘Cluster Split’ is a High Availability / Disaster Recovery SAP Java system with two nodes and multiple application servers. Each node may have SAP Web Dispatcher (WD), SAP Central Services (SCS), SAP Enqueue Replication Service (ERS) and a database. The nodes are bound together in a HA cluster via HA software.

There is only one active WD, SCS, ERS and DB instance in the cluster of nodes at any given moment. If an active instance fails, the inactive activates and replaces it.

021-hacs0.jpg

1.     Preparation

Prepare the production system for nZDM Java: configure database settings, deactivate background jobs, set the landscape directory (SLD) to read-only mode, download and run nZDM Java GUI.

 

2.     Connect nZDM Java GUI to the source (production) system

Connect the nZDM Java GUI to one of the application servers of the source system.

022-hacs1.jpg

3.     Start recording

Initialize recording of the database changes on the production system.


4.     Split the cluster / break HA

Remove from the cluster the node that will be used as a target system.

023-hacs2.jpg

5.     Isolate the removed node

Configure the isolation (network fencing) of the removed node to avoid conflict with the source (production) system.


6.     Add one Java Application Server (AS) to the removed node

Add an application server to the node that was removed from the cluster and create new Java system (target). That application server will be used for the connection between the target system and the nZDM Java GUI.



7.     Start the target system

When the removed node is started it starts as a target system.

024-hacs3.jpg

8.     Update the target system

Perform the desired maintenance activities on the target system.

025-hacs4.jpg

9.     (optional) Test the updated system

The updated target system can be tested before replacing the production system. No production system downtime is required while testing on the target system. However, before performing tests it is important to make a backup of the target system. After the tests, the target system is restored from that backup.


10.   Create cluster from the target system

Configure an HA cluster from the target system.


11.   Connect nZDM Java GUI to the target system

Connect the nZDM Java GUI to the application server that was added to the target system.

026-hacs5.jpg

12.   Start DB data replication from the source system to the target system

Replicate all available data changes from the source’s DB to the target system.

027-hacs6.jpg

13.   Stop source system / enter downtime phase

To replicate the last data changes, the source system is stopped, which starts the downtime phase. Only the DB of the source system remains active so that the replication can be finished.

This phase is activated via the nZDM GUI.


14.   Stop and unfence target system

After finishing the replication, the updated target system is stopped and unfenced.


15.   Reconfigure application servers

Reconnect the application servers to the updated system.

030-hacs8.jpg

16.   Recreate the high availability system

To complete the process we must recreate the high availability setup of the production system. This is done by adding the former source system to the cluster we created with the updated system. Once the nodes are synced (via the HA software) the HA setup is restored and the system is successfully updated.

031-hacs9.jpg


* The steps described above are highly dependent on the system’s specific setup. Therefore, on a system that is set up differently from the one used in this scenario, the steps will differ.

 

Conclusion

 

The nZDM Java procedure offers a flexible and effective solution for updating SAP Java systems. It also supports various High Availability landscapes of SAP Java systems. The main benefits of the procedure are the significantly reduced downtime and the ability to test the updated system before making it a production system, minimizing the risk of possible problems.



Further Information

Central note for nZDM Java for SP13

Central note for nZDM Java for SP12

Minimizing planned downtime during maintenance

nZDM Java for EP user guide

What is High Availability system?

Improved Log File for Software Provisioning Manager


A very warm welcome to all technical experts and users of the Software Provisioning Manager!

 

To increase the supportability of processes performed with Software Provisioning Manager, the log file sapinst_dev.log was improved, based on received feedback, as outlined in this blog.

 

Let me start with a simple example – first, here is an extract of a log file provided until now:
01.jpg

 

And now the same information from a new log file:
02.jpg

 

What are the main changes seen here?

  • The entry offers a more compact view, with only one header line
  • The effective user/group is traced on each line (only listed generically as <Domain>\<User> in the example screenshot above, but I hope you get the idea)
  • Superfluous details were removed – the library information in the example above (lib=syslib module=syslib) is just one sample; overall, a lot of unneeded trace output could simply be taken out

 

In addition, the following improvements were realized:

  • A possible JavaScript code dump gets now logged into a separate log file (js_dump.log), keeping the sapinst_dev.log file smaller and cleaner
  • The log file now also traces the exact path of the provisioning service as you had selected it on the Welcome screen in the tool - for example:
    03.jpg
  • Finally, Software Provisioning Manager traces the CD labels that it found:
    04.jpg

 

We hope that all these small changes make the handling of Software Provisioning Manager – especially in the case of issues – easier for you.

 

The shown improvements are made available with SAPinst 720-2 patch 2015.04 (for more information, see SAP Note 1548438), which is part of Software Logistics 1.0 SPS13.

 

Thoughts for further improvements? Please submit an idea in our Software Logistics space in SAP Idea Place under the category Solution Implementation.

SL Toolset 1.0 SPS 13: improved Software Logistics Tools


This blog describes the new and improved tools in the SL Toolset 1.0 with SPS 13.
    You should be familiar with the concept of the Software Logistics Toolset 1.0 ("SL Toolset"), see
      The Delivery Channel for Software Logistics Tools: "Software Logistics Toolset 1.0"

 

 

Overview on tools delivered with SL Toolset 1.0 SPS 13

 

Availability: SL Toolset 1.0 SPS 13 is available since April 27th 2015.

 

What's in:

  • compared with SPS 12, no new tool joined the SL Toolset 1.0
  • existing tools are improved and updated: some tools are delivered in a new SP, some without (when only minor fixes were done)
  • Most of the tools offer a feedback form to provide both statistical data as well as individual feedback


sl_toolset_sps13_tool_overview.jpg

Further information on the SL Toolset SPS 13:

  • SAP Note 2031385 (Release Note for SL Toolset 1.0 SPS 13; logon required)
  • Quick link /sltoolset on SAP Service Marketplace (logon required)
  • Idea Space for the Software Logistics Toolset and its tools

 

 

"nZDM for SAP NetWeaver Java" 1.0 SP13

 

Offering

  • implement Support Packages and patches for SAP Java-stack systems with minimal technical downtime
  • Target Products:
    • SAP NetWeaver Portal 7.02, 7.3x, 7.4
    • on request: SAP Business Process Management and SAP Process Orchestration
      releases 7.3 incl. EHPs, and 7.4 (see SAP Note 2039886; logon required)

Changes with SL Toolset SPS 13

  • Manually applying custom settings to the source system is no longer required. The nZDM Java-specific custom settings are now automatically applied during the first start of the target system. PI communication channels and background jobs are suspended on the target system automatically as well. The values of the original system settings are restored during the finalization of the nZDM Java procedure

More information

 

"nZDM for SAP Process Integration" 1.0 SP07

 

Offering

  • implement Support Packages for SAP Process Integration (SAP PI) with a minimal technical downtime of approx. 30-60 minutes
  • Target Products: SAP PI dual stack 7.10, 7.11, 7.30, 7.31

 

Changes with SL Toolset SPS 13

  • No changes

 

More information

 

 

Software Provisioning Manager 1.0 SP 08

 

Offering

With Software Provisioning Manager, you get the latest SAPinst version that enables provisioning processes for several products and releases on all supported platforms – support for the latest products, versions, and platforms, including the latest fixes in the tool and the supported processes, plus the benefit of a unified process for different product versions.

 

Changes with SL Toolset SPS 13

  • Option to increase security by restricting access to the Message Server via Access Control List
  • Further improvements concerning unified consumption experience, offering an up-to-date installation optionally including the following activities:
    • Single System Transport Management system configuration and include ABAP transports
    • SPAM/SAINT update
    • Starting of Software Update Manager
  • System copy:
    • Option for parallel execution of size determination (R3szchk) of source data during export now available for all supported databases except SAP ASE
    • Further improvements for a migration to SAP HANA (such as option to increase export performance by using more than one SAP application server of source system for export)
  • System rename now also supported for Java systems based on:
    • SAP NetWeaver 7.10 SP 05 and higher, SAP NetWeaver 7.11, SAP NetWeaver 7.2

 

More information

 

 

Software Update Manager 1.0 SP 13

 

Offering

  • Consolidation of different software logistics tools into one unified software logistics tool
  • Runtime reduction: Higher degree of parallelization for certain phase types
  • Downtime reduction: Enhanced Shadow System capabilities for specific use cases
  • Combine SAP system update with migration to SAP HANA (DMO: database migration option)

 

Changes with SL Toolset SPS 13

  • AS ABAP, AS Java: optional usage of new user interface (SAPUI5 based)
  • Business downtime minimization: import of customer transport requests in SUM (available on request)
  • Execute import of business transport requests exclusively in SUM (available on request)
  • DMO: support for start release SAP R/3 4.6C
  • DMO: benchmarking tool for testing migration performance prior to DMO

 

More information

 

 

Standalone Task Manager for Lifecycle Management Automation 1.0 SP 01


Offering

Stand-alone task manager for lifecycle management automation is a framework to execute the automated configuration templates listed below:

  • The SSL configuration template validates the SSL configuration settings both for ABAP and for Java environments and generates HTML reports that can be used for further analysis. It also performs the SSL configuration automatically and describes the required manual tasks (SAP Note 1891360)
  • SAP ERP <-> SAP CRM: template to establish connectivity between an SAP ERP system and SAP CRM
  • Mobile Configuration templates for Backend, Gateway and SUP (SAP Note 1891358)
  • HANA user management and SLT (System Landscape Transformation) configuration (SAP Note 1891393)

 

Changes with SL Toolset SPS 13

  • No changes


More information

 

 

SAPSetup 9.0


Offering

SAPSetup offers easy and reliable functionality for installations of different scales:

  • Installation of frontend products without administrator permissions
  • Remote installations from Administration PC
  • Configuration and export of installation packages containing multiple products
  • Consistency check
  • Central log file analysis

 

Changes with SL Toolset SPS 13

  • SAPSetup with the latest corrections as outlined in the SAP Notes below

 

Further information:

 

 

CTS Plug-In 2.0 SP15


Offering

  • Generic CTS to connect your non-ABAP applications with CTS
  • New user interfaces and new features for CTS
  • Central Change and Transport System (cCTS) as technical infrastructure for Change Request Management (ChaRM) and Quality Gate Management (QGM) in SAP Solution Manager 7.1 SPS 10 and higher

Changes with SL Toolset SPS 13

  • Improvements for CTS Plug-In are no longer delivered with SL Toolset, but will come with the respective SAP NetWeaver support packages; for more information, see SAP Note 1665940

More information

 

AddOn Installation Tool and Support Package Manager

 

Offering

SPAM/SAINT provides easy access to lifecycle management processes by being part of the SAP NetWeaver AS ABAP stack and by being accessible directly via SAP GUI.  This way you are able to control different kinds of implementation processes, such as installing, upgrading or updating ABAP software components. SPAM/SAINT Updates themselves can be applied to ABAP-based systems independent of underlying SAP NetWeaver component versions.

 

Changes with SL Toolset SPS13

  • Test-scenario for Add-On Installation/Deinstallation
  • Improvements in Add-On Deinstallation
  • SPAM feedback form improved
  • Fixes for SUM (see SAP Notes 2144370 and 2143454 for details)

 

More information

 

 

Scenario "Unified Consumption Experience"

This scenario is not a new tool, but important to mention:
Unified Consumption Experience (UCE) aims at simplifying the process of installation for a new system on a specific target software level.

See the following blog for more information: Unified Consumption Experience

 

Boris Rubarth

Product Management SAP SE, Software Logistics

How to Upgrade SAP Systems using SUM Tool


This blog guides you step by step through upgrading SAP systems using the SUM tool for different SAP product systems.

 

Software Update Manager

The Software Update Manager (SUM) is a multi-purpose tool that supports various processes, such as performing a release upgrade, installing enhancement packages, applying Support Package Stacks or updating single components on SAP NetWeaver.

 

Planning

Before you start the actual upgrade, you have to plan it carefully so that downtime is reduced to a minimum and the upgrade runs as efficiently as possible. We recommend that you start planning your update at least two weeks before you begin it.

 

Software Update Manager ( SUM )

Software Update Manager (SUM) is the tool for system maintenance: release upgrades, EHP implementations, SP stack implementations for SAP NetWeaver-based systems, and DMO. SUM is delivered with Software Logistics Toolset 1.0 and can be downloaded from the link:

http://service.sap.com/sltoolset 

New SUM patches are released frequently with the latest features and fixes for known bugs.
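For orientation, the downloaded SUM archive is typically extracted with SAPCAR on the host of the primary application server before the tool is started. The archive name below is only a placeholder and depends on the SP and patch level you downloaded; SAPCAR must be available on the host:

cd /usr/sap/<SID>
SAPCAR -xvf /<download_dir>/SUM10SP<xx>_<patch>.SAR     # creates the subdirectory SUM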

Upgrade guides

The Master Guide (Upgrade Master Guide) takes you through the complete update and references the required documentation for each step. It is essential to read the upgrade guide and the Master Guide for your product version before starting the upgrade.

If there are preparation and follow-up activities for the upgrade that are specific to your product, they are described in a product-specific document. This document is also referenced in the Master Guide (or Upgrade Master Guide).

The required guides can be downloaded from the link:

http://service.sap.com/instguides

SUM guide

Please consider as well that each operating system and database combination has its specific SUM guide, available at:

https://service.sap.com/~sapidb/011000358700000783082011E/SUM10_Guides.htm 

Alternatively you can follow this path:

http://service.sap.com/sltoolset  -> Software Logistics Toolset 1.0 -> Go to the bottom of the page -> Expand “System Maintenance” -> Updating SAP Systems Using Software Update Manager 1.0 SP<XX>

SAP Notes

To prepare and perform the update of your SAP system, it is required to verify additional information, not included in the guides. This information is in a range of SAP Notes in SAP Support Portal, which you have to read before you start the preparations.

We recommend to access the following SAP Notes from SAP Support Portal before you start the update procedure:

Central Software Update Manager Note

SAP Note for your database

SAP Note 1940845 - MOpz: enhancement to support new backend services

DMO Central Note 1813548 – in case you are using the Database Migration Option (DMO)

These SAP Notes are updated regularly; make sure that you always use the newest version.

The keyword for performing the upgrade in confirm target roadmap is available in the Software Update Manager note. It is not possible to continue with the upgrade without this keyword.

Additional SAP Notes may be required. They can be downloaded using the link:

http://service.sap.com/notes

Hardware Requirements

Before starting the upgrade, it is mandatory to check the CPU, main memory, disk space and page file.

For more information, please refer to the link:

https://service.sap.com/sizing
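On a UNIX/Linux application server, a quick first check of CPU and memory can be done with standard OS commands (illustrative only; the sizing guidelines above remain the authoritative source):

grep -c ^processor /proc/cpuinfo     # number of available CPU threads
free -g                              # main memory and swap in GB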

Free Disk Space Requirements

Disk space is required in the file system for the SUM directory, the download directory, and the directory DIR_TRANS. The space required depends on the product you are updating.

The Software Update Manager calculates the space requirements for the database. The free space required for the database is approximately in the range from 50 to 200 GB. Please note that it can be higher, depending on your database size and structure.

SUM directory: approximately 20 GB

Download directory (temporary space requirement): approximately 20 GB

DIR_TRANS: approximately 20 GB

Shadow system: approximately the space required for your source release instance, that is, the size of the following directory:

  • UNIX: /usr/sap/<sapsid>
  • Windows: <Drive>:\usr\sap\<sapsid>
  • IBM i: /usr/sap/<SID>
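A quick way to verify the available free space and to estimate the shadow system size on UNIX/Linux is shown below (standard OS commands; adapt the paths to your environment):

df -h /usr/sap/<sapsid> /usr/sap/trans     # free space for SUM directory, download directory and DIR_TRANS
du -sh /usr/sap/<sapsid>                   # size of the source release instance, rough shadow system estimate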

 

Upgrade of the Operating System and Database System

When you upgrade the SAP system, the target release of your upgrade may require you to update the operating system version and database version as well.

You can determine if the target release is supported on your current Operating System and Database using the Product Availability Matrix (PAM):

https://support.sap.com/pam

For upgrades including Database Migration Option, the minimum database versions can be checked in Note 1813548 - Database Migration Option (DMO) of SUM

If you need to upgrade an operating system or database, or migrate a database, then the timing and the sequence of the individual upgrades is of great importance. The procedure differs according to each database.

Please consider that upgrades from older releases may need to be executed in two steps. More details are available in the file SUM_xx_paths.pdf attached to the Central Software Update Manager Note. Cross-check your DB/OS information in the PAM for such a requirement.

Note: upgrades to target release 740 may require database updates depending on the target kernel release as well; see details in SAP Note 1969546 - Release Roadmap Kernel 740.

Software Requirements

Your SAP system should have one of the source releases that are available for your upgrade and DB/OS combination.  Different SAP NetWeaver usage types may have different minimum Support Package levels. If you upgrade an SAP NetWeaver-based system containing various usage types, make sure that your source release is on a minimum Support Package level for all usage types implemented in the system.

Please refer to SAP Note 1850327 and its references for the Support Package Stack source requirements and update your system if necessary. The correct SP level will then be calculated by the Solution Manager Maintenance Optimizer. Additional patches, such as Java patches, can be obtained from:

https://support.sap.com/swdc

Typically, SAP systems like SAP ERP, SAP CRM, SAP SCM or SAP SRM are part of an SAP system landscape that contains various interconnected systems. Business processes can run across these systems. When planning an upgrade of the systems in your landscape, if you want to know whether it has an impact on other systems in your landscape – that is, whether the upgrade requires changes to other systems as well – please access:

http://service.sap.com/uda

Preparation

Solution Manager - Stack XML Generation

In order to perform a Support Package update, EHP installation, or release upgrade, a stack XML file must be generated in Solution Manager’s Maintenance Optimizer. Landscape verification is required to enable the Maintenance Optimizer to create a proper stack configuration XML file for the correct product constellation.

To be able to generate a correct XML file for the upgrade, please make sure that you read note 1887979 carefully. Also make sure that the LMDB is updated with correct software system info.

Further information and reference:

http://wiki.scn.sap.com/wiki/x/VIwqCw 

Maintenance Planning Guide for SAP Solution Manager 7.1 SP05 and higher

http://service.sap.com/mopz

Manually Prepared Directory

If the maintenance to be performed is a Java patch import or the update of a custom component, you have to use the Manually Prepared Directory option.

For more information on this option, read SAP note 1641062 - Single component update and patch scenarios in SUM

In the SUM Update Guide, the chapter named "Applying Single Component Updates and Patches Using a Manually Prepared Directory" has the steps to be followed and more information.

Install or Update SAP Host Agent

SAP Host Agent Version 142 or higher is required for the proper execution of the update process.

If it is included in the stack.xml, SAP Host Agent can be automatically installed only on the primary application server host.

To manually install SAP Host Agent or update it on remote hosts, proceed as described in the SAP Library:

http://help.sap.com/
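As a rough sketch of how this is typically done on a UNIX/Linux host (the archive name is a placeholder; check the SAP Host Agent documentation for the exact procedure on your platform):

/usr/sap/hostctrl/exe/saphostexec -version                              # check the installed version (as root)
cd /usr/sap/hostctrl/exe
./saphostexec -upgrade -archive /<download_dir>/SAPHOSTAGENT<ver>.SAR   # upgrade from a downloaded archive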

Running the Software Update Manager

 

The Software Update Manager controls the entire procedure, from checking the system requirements and importing the necessary programs, through stopping production operation, until production operation is resumed. The procedure is divided up into a number of different roadmap steps. The roadmap steps are in turn divided into individual steps. The successful completion of a step is a precondition for the success of all subsequent steps.

For a complete list of all steps, see the process overview report, which you can access by choosing the GUI menu option -> Update Process Overview .

Alternatively, you can see the ProcessOverview.html file available in the directory <DRIVE>:\<path to SUM directory>\SUM\sdt\htdoc.

In case of any issues, please refer to the following troubleshooting guides, which contain information about the known issues during the upgrade.

Troubleshooting procedures

Performance issues

Performance during Upgrades and Enhancement Packages http://wiki.scn.sap.com/wiki/x/cAgsG

 

Troubleshooting guides

System Upgrade And Update central page – http://wiki.scn.sap.com/wiki/x/mYB5Fw

SUM for ABAP – http://wiki.scn.sap.com/wiki/x/hwGlFw

SUM for Java – http://wiki.scn.sap.com/wiki/x/TwGpFw

SPAM & SAINT – http://wiki.scn.sap.com/wiki/x/VAGpFw

 

For other errors or issues, the following SAP Knowledge Base Article can help you finding a solution:

SAP KBA 2081285 - How to get best results from an SAP search?

Follow-up activities

Please refer to section 6 of the SUM guide for detailed information about the follow-up activities that need to be performed in the system before releasing it for production use.

Emergency procedures

Resetting an upgrade

Please refer to section 5.x of the SUM guide “Resetting the Software Update Manager”.

Additional information is also available in this page.

SAP Note 1790486 - SAP_ABA is in an undefined state that is not safe to be upgraded

 

Data loss after upgrade

http://wiki.scn.sap.com/wiki/x/WJOPFw

Additional resources

Continuous Quality Check & Improvement Services

You can also use some of the expert SAP Continuous Quality Checks and SAP Improvement Services during the lifecycle of your upgrade. Some of the available services are:

CQC Upgrade

CQC Upgrade Assessment

CQC Downtime Assessment

CQC Going Live Support

These services are available as part of SAP support offerings and can also be ordered as single services.

SAP Enterprise Support Academy

Browse through our catalog of videos, documents, live sessions of Quick IQ’s, Meet the Expert sessions, Expert Guided Implementations, Best Practices and much more.

In case you have any question during the upgrade, please access the Software Logistics space on SAP Community Network, where you will be able to find further information and exchange experiences with other customers and professionals.

SCN Space: Software Logistics

Best Practices for Upgrading SAP Systems



Software Update Manager

 

The Software Update Manager (SUM) is a multi-purpose tool that supports various processes, such as performing a release upgrade, installing enhancement packages, applying Support Package Stacks or updating single components on an SAP NetWeaver system.

 

This document contains important material and information for performing these tasks, from planning to post-update activities.


   





PLANNING



    Before you start the actual upgrade, you have to plan it carefully so that downtime is reduced to a minimum and the upgrade runs as efficiently as possible. We recommend that you start planning your update at least two weeks before you begin it.


    Software Update Manager (SUM)


The SUM is the tool for system maintenance: release upgrades, EHP implementations, SP stack updates, Database Migration Option (DMO), among others. SUM is delivered with Software Logistics Toolset 1.0 and can be downloaded from the link:

         

        http://service.sap.com/sltoolset

         

New SUM patches are released frequently with the latest features and fixes for known bugs.


   

    Upgrade Guides
   

The Master Guide (Upgrade Master Guide) takes you through the complete update and references the required documentation for each step. It is essential to read the upgrade guide and the master guide for your product version before starting the upgrade.

 

If there are preparation and follow-up activities for the upgrade that are specific to your product, they are described in a product-specific document. This document is also referenced in the Master Guide (or Upgrade Master Guide).

 

The required guides can be downloaded from the link:

 

http://service.sap.com/instguides



SUM guide


Please consider as well that each operating system and database combination has its specific SUM guide, available at:

 

https://service.sap.com/~sapidb/011000358700000783082011E/SUM10_Guides.htm

 

Alternatively you can follow this path:

 

http://service.sap.com/sltoolset -> Software Logistics Toolset 1.0 -> Go to the bottom of the page -> Expand "System Maintenance" -> Updating SAP Systems Using Software Update Manager 1.0 SP<XX>

 


SAP Notes


To prepare and perform the update of your SAP system, it is required to verify additional information, not included in the guides. This information is in a range of SAP Notes in SAP Support Portal, which you have to read before you start the preparations.

We recommend to access the following SAP Notes from SAP Support Portal before you start the update procedure:

 

These SAP Notes are updated regularly, make sure that you always use the newest version.

 

 

The keyword for performing the upgrade in the Confirm Target roadmap step is available in the Software Update Manager note. It is not possible to continue with the upgrade without this keyword.

 

 

Additional SAP Notes may be required. They can be downloaded using the link:

 

 

http://service.sap.com/notes

 

 

Hardware Requirements

 

Before starting the upgrade it is mandatory to check the CPU, main memory, disk space and page file.

 

 

For more information, please refer to the following link:

 

 

https://service.sap.com/sizing

 

 

Free Disk Space Requirements

 

Disk space in the file system for the SUM directory, the download directory and directory DIR_TRANS. The space required depends on the product you are updating.

 

 

The Software Update Manager calculates the space requirements for the database. The free space required for the database is approximately in the range from 50 to 200 GB. Please note that it can be higher, depending on your database size and structure:

 

 

SUM Directory: approximately 20 GB

Download Directory (temporary space requirement): approximately 20 GB

DIR_TRANS: approximately 20 GB

Shadow System: approximately the space required for your source release instance, that is, the size of the following directory:

      • UNIX: /usr/sap/<sapsid>
      • Windows: <drive>:\usr\sap\<sapsid>
      • IBM i: /usr/sap/<SID>

 

 

Upgrade of the Operating System and Database System

 

When you upgrade the SAP system, the target release of your upgrade may require you to update the operating system version and the database version as well.

 

 

You can determine if the target release is supported on your current Operating System and Database using the Product Availability Matrix (PAM):

 

 

https://support.sap.com/pam

 

 

For upgrades including Database Migration Option, the minimum database version can be checked in Note 1813548 - Database Migration Option (DMO) of SUM.

 

 

If you need to upgrade an operating system or database, or migrate a database, then the timing and the sequence of the individual upgrades is of great importance. The procedure differs according to each database.

 

 

Please consider that upgrades from older releases may require to be executed in two (or more) steps. More details are available in the file SUM_xx_paths.pdf attached to the Central Software Update Manager Note. Cross-check your DB/OS information at PAM for such requirement.

 

 

Note: upgrades to target release 740 may require database updates depending on the target kernel release as well; see details in note 1969546 - Release Roadmap Kernel 740.

 

 

Software Requirements

 

Your SAP system should have one of the source releases that are available for your upgrade and DB/OS combination. Different SAP NetWeaver usage types may have different minimum Support Package levels. If you upgrade an SAP NetWeaver-based system containing various usage types, make sure that your source release is on the minimum SP level for all usage types implemented in the system.

 

 

Please refer to SAP Note 1850327 and its references for the Support Package Stack source requirements and update your system if necessary. The correct SP level will then be calculated by the Solution Manager Maintenance Optimizer. Additional patches, such as Java patches, can be obtained from:

 

 

https://support.sap.com/swdc

 

 

Typically, SAP systems like SAP ERP, CRM, SCM or SRM are part of an SAP system landscape that contains various interconnected systems. Business processes can run across the various systems. When planning an upgrade, please refer to the following link for checking the potential impact on these connected systems:

 

 

http://service.sap.com/uda

 

 

PREPARATION

 

 

Solution Manager - Stack XML Generation

 


In order to perform a Support Package Update, EHP installation or Release Upgrade, a stack XML file must be generated in Solution Manager's MOPZ. Landscape verification is required to enable the Maintenance Optimizer to create a proper stack configuration XML file for the correct product constellation.

 

To be able to generate a correct stack XML file for the upgrade, please make sure that you read note 1887979 carefully. Also make sure that the LMDB is updated with correct software system info.

 

For further information and reference:

 

http://wiki.scn.sap.com/wiki/x/VlwqCw

Maintenance Planning Guide for SAP Solution Manager 7.1 SP05 and higher

http://service.sap.com/mopz

 

 

Manually Prepared Directory

 

If the maintenance to be performed is a Java patch import or the update of a custom component, you have to use the Manually Prepared Directory option at the beginning of the SUM process.

 

For more information on this option, please read SAP Note 1641062 - Single component update and patch scenarios in SUM

 

In the SUM Update Guide, the chapter named "Applying Single Component Updates and Patches Using a Manually Prepared Directory" has the steps to be followed and more information.

 

 

Install or Update SAP Host Agent

 

SAP Host Agent Version 142 or higher is required for the proper execution of the update process.

 

If it is included in the stack.xml, SAP Host Agent can be automatically installed only on the primary application server host.

 

To manually install SAP Host Agent or update it on remote hosts, proceed as described in the SAP Library:

 

http://help.sap.com

 

 

Running the Software Update Manager

 

The Software Update Manager controls the entire procedure, from checking the system requirements and importing the necessary programs, through stopping production operation, until it is resumed. The procedure is divided up into a number of different roadmap steps. The roadmap steps are in turn divided into individual steps. The successful completion of a step is a precondition for the success of all subsequent steps.

 

For a complete list of all steps, see the process overview report, which you can access by choosing the GUI menu option -> Update Process Overview.

 

Alternatively you can see the ProcessOverview.html file available in the directory <DRIVE>:\<path_to_SUM_directory>\SUM\sdt\htdoc.

 

In case of any issues, please refer to the following troubleshooting guides, which contain information about the known issues during upgrades.

 

 

 

TROUBLESHOOTING

 

 

Performance Issues


Performance during Upgrades and Enhancement Packages: http://wiki.scn.sap.com/wiki/x/cAgsG

 

Troubleshooting Guides

 

System Upgrade and Update central page: http://wiki.scn.sap.com/wiki/x/mYB5Fw

 

SUM for ABAP: http://wiki.scn.sap.com/wiki/x/hwGIFw

 

SUM for Java: http://wiki.scn.sap.com/wiki/x/TwGpFw

 

SPAM & SAINT: http://wiki.scn.sap.com/wiki/x/VAGpFw

 

For other errors or issues, the following SAP Knowledge Base Article can help you finding a solution:

2081285 - How to get best results from an SAP search?

 

 

Follow-Up Activities

 

Please refer to section 6 of the SUM guide for detailed information about the follow-up activities that need to be performed in the system before releasing it for production use.

 

Emergency Procedure

 

Resetting an Upgrade

 

Please refer to section 5.x of the SUM guide "Resetting the Software Update Manager". Additional information is also available on this page.

 

SAP Note 1790486: SAP_ABA is in an undefined state that is not safe to be upgraded.

 

 

Data loss after an Upgrade

 

http://wiki.scn.sap.com/wiki/x/WJOPFw

 

 

Additional Resources

 

 

 

Continuous Quality Check & Improvement Services

 

You can also use some of the expert SAP Continuous Quality Checks and SAP Improvement Services during the lifecycle of your upgrade. Some of the available services are:

 

 

CQC Upgrade

 

CQC Upgrade Assessment

 

CQC Downtime Assessment

 

CQC Going Live Support

 

 

These services are available as part of the SAP Support offerings and can also be ordered as single services.

 

 

SAP Enterprise Support Academy

 

 

Browse through our catalog of videos, documents, live sessions of Quick IQ's, Meet the Expert sessions, Expert Guided Implementations, Best Practices and much more.

 

 

In case you have any question during the upgrade, we invite you to access the Software Logistics space on SAP Community Network, where you will be able to find further information and exchange experiences with other customers and professionals.

 

 

SCN Space: Software Logistics

 

 

If you have questions or comments, please post them below.

Best Practices for SAP System Installation and Transformation


Software Provisioning Manager


The Software Provisioning Manager offers the execution of many system provisioning tasks and covers a broad range of platforms and products, both on the ABAP and the Java technology. Whether you are going to copy an SAP NetWeaver system, rename an SAP Business Suite system, or install a standalone engine (such as SAP LiveCache), you can handle all these tasks with the Software Provisioning Manager.



 

Software Provisioning Manager allows you to install, copy, transform, split, rename, and uninstall products based on SAP NetWeaver AS ABAP and AS Java. For detailed information and the latest updates about Software Provisioning Manager, please check SAP Note 1680045 – Release Note for Software Provisioning Manager 1.0.

 

If you are going to perform any of the mentioned tasks, please also consider taking some time to review the related set of information below:


Software Provisioning Manager


Software Provisioning Manager is the tool for system installation and transformation: install a new SAP system or standalone engine, uninstall, copy or migrate the system to a new server with the same or a different OS and database configuration, rename your SAP system or change attributes such as hostname and instance number. The Software Provisioning Manager is delivered with Software Logistics Toolset 1.0 and can be downloaded from the link: http://service.sap.com/sltoolset


New patches for the Software Provisioning Manager are released frequently with the latest features and fixes for known bugs.


Remark: Software Provisioning Manager makes use of the SAPinst framework and it includes the latest SAPinst version for several products and releases – for the underlying SAPinst framework, you can find more information about known issues and patches at:

 

SAP Note 1548438 – SAPinst Framework 720-2 Central Note

SAP Note 929929 – Latest SAPinst Patch (the latest version of Software Provisioning Manager contains the latest SAPinst patch)

 

Installation Guides


In the Installation & Upgrade Documentation central page you can find several comprehensive technical documents organized by area and release. Master Guides, Installation Guides, Upgrade and Configuration guides and System Copy Guides are available.

The required guides can be downloaded from the link:


http://service.sap.com/instguides


Installing and Uninstalling SAP Systems


Please always refer to the specific Software Provisioning Manager Installation Guide of your SAP System’s NetWeaver release, as it contains detailed information to plan and execute installation and uninstallation tasks:


For SAP Systems based on SAP NetWeaver 7.0x
For SAP Systems based on SAP NetWeaver 7.1 and higher

These and other installation guides can also be accessed following this path: http://service.sap.com/sltoolset -> Software Logistics Toolset 1.0 -> (scroll down to the bottom of the page) Documentation  -> System Provisioning. 


System Copy and OS/DB Migration


The Software Provisioning Manager System Copy Guides below must be used for all System Copy or OS/DB migration related activities:


For SAP Systems based on SAP NetWeaver 7.0x
For SAP Systems based on SAP NetWeaver 7.1 and higher 


To specifically support OS/DB migrations, SAP provides migration services such as the OS/DB Migration Check. With the migration check, you manage the risks involved in a migration and prepare to execute it smoothly. The OS/DB Migration Check is mandatory if you are going to migrate a productive system.


Please be aware that when performing an SAP system migration you will also need a migration key. This migration key can be generated online (http://service.sap.com/migrationkey), using an S-User account assigned to the source system's installation number. If you are facing any issues with the generated migration key, please see SAP Note 1899381.


Renaming SAP Systems


The System Rename activity is available for SAP Systems based on SAP NetWeaver 7.0x, 7.3 and higher. The technical guides for renaming SAP systems are available at:


Renaming SAP Systems based on SAP NetWeaver 7.0x
Renaming SAP Systems based on SAP NetWeaver  7.3 and Higher


The main SAP Note 1619720 for system rename contains remarks, annotations, and corrections discovered after the release of the original documentation and should also be checked.


Splitting a Dual-Stack System


The Software Provisioning Manager offers the capability of splitting an ABAP+Java system into two separate systems and also has features to reestablish the connectivity between the separated ABAP and Java systems for specific scenarios when required. For planning and executing a dual-stack split, the documentation below serves as reference:


Dual-Stack split for SAP Systems based on SAP NetWeaver 7.0x
Dual-Stack split for  SAP Systems based on SAP NetWeaver  7.3 and 7.31


See also:


The main SAP Note 1797362 for Dual-Stack Splitting.

Troubleshooting Procedures


Troubleshooting documents for Software Provisioning Manager – http://scn.sap.com/docs/DOC-62646

 

For errors or issues not covered in the troubleshooting page, the following SAP Knowledge Base Article can help you finding a solution:


SAP KBA 2081285 - How to get best results from an SAP search?

 

Additional Resources


Other Software Provisioning Manager related documentation not mentioned here can be accessed from the following path in SAP Service Marketplace: http://service.sap.com/sltoolset -> Software Logistics Toolset 1.0 -> Documentation -> System Provisioning.

Do you have any comments or suggestions? Please post them below.

Understanding of native transport of changes in HALM


I already gave a basic introduction to the transport of HANA objects here: Transport of HANA objects with HALM

Based on my experience with handling customer messages, I decided to summarize the key misunderstandings around using HALM change-based transport and to provide some technical details which are (still) missing in the official documentation. Hopefully this can help HALM users to avoid some critical situations in the future.

 

1. HALM cares about all released objects.

The statement sounds very clear, but in HALM it has a special meaning. Let's say you installed a HANA system, created a DU, assigned some newly created packages to the DU, and worked on some objects. After some time, you decide to switch on Change Recording in your system, and from then on all object modifications are recorded in a change. The key point here is that all the active objects existing at the moment Change Recording is enabled are released in the so-called "base" change. As a result, when the first change of a DU is transported, all the DU objects released in the "base" change are also transported, even if this is not shown in the list of the transported changes.

Pict1.JPG

For many users this is not really expected, since they believe that only "manually" recorded and released changes should be transported. But for consistency reasons, it would probably be incorrect to transport the first manually released change without the objects that existed before Change Recording was enabled.

 

2. HALM can re-transport already transported changes.

I have already seen many users complaining that HALM sometimes re-transports already transported changes. And it really does, when the consistency of the DU objects cannot be guaranteed in the target system. Let's take the example illustrated above: DUA has 2 assigned packages, "aaa.aa.a" and "bbb.bb.b". You transport your released changes from time to time and everything looks great, because only "not-yet-transported" changes are available for the transport. But one day you notice that many of the already transported changes are waiting to be transported again. This can happen for different reasons, but the most usual one is the reassignment of DU packages. As you know, the transport routes in HALM are defined for specified DUs. But a change can contain objects from packages of different DUs (or even packages not yet assigned to any DU).

Pict2.JPG

If you now assign the package "ccc.cc.c" to DUA in the source system, the changes 1, 2 and 3 have to be re-transported again. This is done because the transport archives are DU tgz files including all DU objects at the given point in time. If only the "aaa.aa.a" and "bbb.bb.b" packages were assigned to DUA when the changes were transported, it means that only objects of these 2 packages were transported. But after you assigned the "ccc.cc.c" package to DUA, the entire changes have to be re-transported. Unfortunately, it is not possible to transport only the objects of the "ccc.cc.c" package with HALM in this case.

Another case where already transported changes have to be re-transported with HALM is when you import a DU archive from a file system into your target system (for whatever reason).

 

3. HALM always cares about predecessors.

It is not possible to transport a released change without transporting its existing predecessors. The predecessors are calculated in HALM on a package level (since SP8) or on a DU level (in SP7). What does that mean? Very simple: if an earlier released change contains objects of the same package (package level), the change is considered to be a predecessor. So in the example above, change 2 is a predecessor of change 3, and change 1 is a predecessor of change 2 (and of change 3). Therefore, it is not possible to transport change 3 without transporting change 1 and change 2 (even if the changes contain different objects!).

 

4. HALM brings objects to the target system.

... even if a transport failed because of activation errors. Yes, it's true. HALM executes transports (as well as pure imports) with a special activation mode (=4), with the result that even broken objects are committed. So, you should always be able to find your objects in the target system (either successfully activated or broken).

 

5. In the Change Recording enabled system HALM always creates a change as a result of a transport (or import).

And automatically releases it (since SP8), even if activation errors happened. That is probably why it makes no sense to enable Change Recording in a target system.


Software Provisioning Manager Survival Guide


The following tries to answer the most burning questions about Software Provisioning Manager 1.0 in case you are not so familiar with the tool.

 

What is it ?


Software Provisioning Manager is part of the SL Toolset. It uses the SAPinst framework to offer all kinds of software logistics procedures like installation, uninstallation, system copy, system rename, etc. for SAP products. The chosen procedure is executed by the SAPinst framework. The steering logic is stored in so-called control files. The control file itself is divided into so-called steps. Each step contains an atomic action which can be redone as often as necessary. In addition to this, the installer keeps a log of all processed steps. Due to this, it is possible to restart the Software Provisioning Manager after an error without starting the whole procedure from scratch.

 

sapinst_flow_logic.PNG

After a procedure from the product catalog has been chosen, the installation directory is created and the control files are copied into it. Even if the installer is aborted during the procedure, it is possible to continue as long as the installation uses the content from the installation directory. For that, the installer resumes from the last step that was executed.

 

Where can I download it ?


It is not delivered directly with a product. Instead you can download it from http://service.sap.com/sltoolset

Where will it be executed ?


In case you are using an SAP HANA database as the backend, the Software Provisioning Manager is always executed where the SAP system is to be installed.

In special cases like a migration towards SAP HANA, you might run the installer on a special host like a HANA standby node (refer to http://scn.sap.com/docs/DOC-47657, page 13), but for standard scenarios like installation, uninstallation, system copy and system rename you start the Software Provisioning Manager on an SAP application server.

 

What are the prerequisites to start it ?


After extracting the SAR archive, you have to log on as the root user on UNIX/Linux, or as a user which is in the local Administrators group on Windows.

On UNIX/Linux you must be able to start an X Window session from the shell you want to use. If the command xclock works, you should be fine.

Due to this, you need to log on directly with user root, as a user switch in the shell might disable the possibility to start an X Window session.
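A minimal sketch of these prerequisites on UNIX/Linux (the archive name and paths are placeholders):

mkdir /sapcd/swpm && cd /sapcd/swpm
SAPCAR -xvf /<download_dir>/SWPM10SP<xx>_<patch>.SAR   # extract the software provisioning manager archive
xclock                                                 # verify that an X Window can be opened from this shell
./sapinst                                              # start the installer as user root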

In case you have problems setting up the X Window session, you can use the remote GUI of the Software Provisioning Manager instead.

For this you have to download the Software Provisioning Manager for your Windows platform.

After extracting it, you can start the executable sapinstgui.exe. You then have to start the procedure on the server with the option -nogui. For example:

 

IM_LINUX_X86_64/sapinst -nogui

...

guiengine: No GUI server connected; waiting for a connection on host plx101, port 21212 to continue with the installation

You can then use the host plx101 and port 21212 to connect from the local GUI.
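On the Windows PC running the downloaded GUI you can then connect to that server; as far as I know, sapinstgui also accepts the connection data on the command line, otherwise simply enter host and port in the connection dialog:

sapinstgui.exe -host plx101 -port 21212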

 

remote_gui.png

 

Where do the logfiles go ?

 

In case you are starting a new installation there are two possibilities.

  1. Start the executable sapinst directly in the folder you have extracted, by executing ./sapinst or via double-click on Windows.
    Choose the procedure you want to execute. The installer will then create a folder structure which reflects the product you have chosen.
    start_swpm_on_dvd.PNG
    In case of a restart you always have to specify the product again. Otherwise the Software Provisioning Manager will not find the installation directory.
  2. Create your own directory and start the Software Provisioning Manager from that directory.
    start_swpm_specified_path.PNG
    When you are using a specified path, the directory structure can be much simpler. In case of a restart you don't have to specify the installation procedure again. The installer will recognize the current installation and continue right away. In case you want to know what the location of the installation directory is, you can refer to the file start_dir.cd.

 

What logfiles should I check ?


Inside of the installation directory there are several types of files and directories:

  • Control files like control.xml, control.dtd, keydb.xml. These files steer the installation (what to do, what files to create, etc.),
  • Executables like migmon.jar, and the folder sapjvm, which contains a Java virtual machine,
  • Log files: these files trace the installation procedure. Sorting them by date should give you a good starting point.
    The main file to check is sapinst_dev.log; in case of SAP HANA, the second file to check is HdbCmdOut.log (see the example after this list).
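For example, on UNIX/Linux you can get a quick overview of the log files like this (standard commands, nothing SWPM-specific; the installation directory below is the default location and may differ in your case):

cd /tmp/sapinst_instdir/<product>/<procedure>
ls -lt                                        # newest log files first
grep -in error sapinst_dev.log | tail -20     # last error messages in the main log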

 

What kind of errors can occur  ?

 

Problems with the Software Provisioning Manager can be divided into categories:

  • Coding errors like syntax errors.
  • Procedure errors. For example, an executable is called before it is even installed.
  • Tool errors. For example, an executable like R3trans fails with an error.

 

The first two error categories are the most critical because it will be hard to fix them without the responsible developer on hand.

Analyzing a tool error is not so difficult. For this you have to keep in mind that the Software Provisioning Manager calls external executables just like you would call them from the command line. That means in most cases you will be able to reproduce the error by executing the executable (e.g. R3trans) the same way the Software Provisioning Manager does. The first step for executing the tool is to know how it is called. For this, in most cases the tool gets its own log file. In addition, the file sapinst_dev.log contains the complete call. Most of the SAP executables are executed as user <sid>adm; for this the installer executes a user switch. So in this case you want to execute the tool the same way, as user <sid>adm. The installer uses the same environment as you get when you log on as user <sid>adm.
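A hedged sketch of this approach, using R3trans as an example (the exact call and parameters have to be taken from sapinst_dev.log or from the tool's own log file):

su - <sid>adm       # switch to the <sid>adm user so that the same environment is used
R3trans -d          # quick connectivity check against the database; writes trans.log
cat trans.log
# then repeat the exact call found in sapinst_dev.log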

 

How can I skip or repeat a step ?

 

In case an error occurs in a step during a procedure, the Software Provisioning Manager will always try to continue with this step. In case there is a special occasion where you want to skip a step, you can refer to SAP Note 1805234. Keep in mind that skipping a step might cause trouble in further steps, as later steps rely on earlier ones and expect them to have completed successfully.

 

Problems during import or export phase

 

The export and import phase is the most critical phase during an installation or a system copy as it is one of the most complex phases.

A detailed description about the architecture of an export and import can be found here: Migration to SAP HANA, analyzing problems.

System Copy and Migration Observations


There are many blogs and documents available describing how to best migrate your SAP system to HANA. This isn't one of those.

 

What this is, on the other hand, is a few observations, and some lessons learned, when migrating an ERP system to new hardware using the R3load, aka Export/Import, method of system copy. The overall process is well-described in the official System Copy Guide and in numerous documents available on SCN, so I won't go into that detail here. What is not well-described, however, is how to go about choosing some of the parameters to be used during the export and import -- specifically, the number of parallel processes. First, however, let's address some background confusion prevalent among many customers.

 

 

Homogeneous or Heterogeneous?

One point that seems to come up, time and time again, in questions posted to SCN is about whether a homogeneous system copy is allowed in the case of a database or operating system upgrade.

 

The answer is yes.

 

If you are upgrading your operating system, for instance from Windows Server 2003 to Windows Server 2012 R2, you are not changing your operating system platform. Therefore, this remains a homogeneous system copy (yes, you should be using system copy as part of a Windows operating system upgrade, as an in-place upgrade of the OS is not supported by either Microsoft or SAP if any non-Microsoft application (i.e., your SAP system) is installed, except in special circumstances which generally do not include production systems).

 

If you are upgrading your database platform, for instance from SQL Server 2005 to SQL Server 2012, you are not changing your database platform, and so, again, this is a homogeneous system copy. It is possible and acceptable to upgrade SQL Server in place, although you might consider following the same advice given for a Windows OS upgrade: export your SAP system (or take a backup of the database), then do a clean, fresh install of the OS and/or DBMS and use SWPM to re-import your database while reinstalling SAP.

 

You are only conducting a heterogeneous system copy if you are changing your operating system, database platform, or both, i.e. from Unix to Windows or Oracle to SQL Server. Or migrating to HANA.

 

  • Homogeneous: source and target platforms are the same (although perhaps on different releases).
  • Heterogeneous: source and target platforms are different.

 

Export/Import or Backup/Restore?

The next question that often arises is whether an Export/Import-based migration or Backup/Restore-based copy is preferred. These methods sometimes go by different names:

 

Export/Import is sometimes called R3load/Migration Monitor based or Database Independent (in the System Copy Guide). Because this method is not reliant on database-specific tools, it is the only method that can be used for heterogeneous copies. However, it can also be used for homogeneous copies.

 

Backup/Restore is sometimes called Detach/Attach, or Database Dependent (in the Guide), or even just Homogeneous System Copy (in the SWPM tool itself). This method relies heavily on database-specific tools and methods, and therefore it can only be used for homogeneous copies.

 

If you are performing a heterogeneous system copy, then you have no choice. You must use the Export/Import method. If you are performing a homogeneous system copy, you may choose either method, but there are some definite criteria you should consider in making that choice.

 

Generally speaking, for a homogeneous system copy, your life will be simpler (and the whole procedure may go faster) if you choose the Backup/Restore method. For a SQL Server-based ABAP system, for instance, you can make an online backup of your source database without having to shut down the SAP system, which means there is no downtime of the source system involved. Copy the backup file to your target system, restore it to a new database there, then run SWPM to complete the copy/install. This is great when cloning a system for test purposes. Of course, if the goal is to migrate the existing system to new hardware, then downtime is inevitable, and you certainly don't want changes made to the source system after the backup.
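As an illustration only, such an online backup could be taken with sqlcmd; server, database and path names are placeholders, COPY_ONLY keeps the backup out of your regular backup chain, and COMPRESSION requires SQL Server 2008 or higher:

sqlcmd -S <sourceserver> -Q "BACKUP DATABASE [<SID>] TO DISK = N'E:\backup\<SID>_copy.bak' WITH COPY_ONLY, COMPRESSION, STATS = 10"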

 

The Detach/Attach variant of this method is probably the fastest overall, as there is no export, import, backup, or restore to be performed. However, downtime is involved. You shut down the source SAP system, then use database tools (SQL Server Management Studio, for instance), to detach the database. Then you simply copy the database files to your target system, use database tools again to attach the database, then run SWPM on the target to complete the copy/install.

 

By comparison, the Export/Import method involves shutting down the source SAP system, then using SWPM to export the data to create an export image (which will likely be hundreds of files, but will also be considerably smaller than your original database), then using SWPM again on the target system to install SAP with the export image as a source. Lots of downtime on the source, and generally speaking a more complex process, but much less data to move across the network.

 

Obviously I am a big fan of using the Backup/Restore or Detach/Attach database-dependent method for homogeneous system copies, and in most cases, this is what I would advise you to choose.

 

When You Should Choose Export/Import

There is one glaring disadvantage to the Backup/Restore method, however. This method will make an exact copy of your database on your target system, warts and all. Most of the time, that isn't really an issue, but there are circumstances where you might really wish to reformat the structure of your database to take advantage of options that may not have been available when you originally installed your SAP system, or perhaps to make up for poor choices at the time of original install that you would now like to correct. Well, this is your big opportunity.

 

What are some of these new options?

  • Perhaps you are migrating to new hardware, with many more CPU cores than available on the old hardware, and you see this as a prime opportunity to expand your database across a larger number of files, redistributing the tables and indexes across these files, thus optimizing the I/O load. Backup/Restore will create a target database with the same number of files as the source, with the tables distributed exactly as they were before. You can add more files, but your tables will not be evenly redistributed across them. Export/Import, on the other hand, doesn't care about your original file layout, and gives the opportunity to choose an entirely new file layout during the import phase.
  • Perhaps you are upgrading your DBMS and would like to take advantage of new database compression options. Yes, you can run MSSCOMPRESS online after upgrading to a platform that supports it, but this can have long runtimes. SWPM will, however, automatically compress your database using the new defaults during the import, assuming your target DBMS supports these defaults, so you can achieve migration and compression in a single step. Compression does not add any extra time to the import.

 

Parallel Processing During Export and Import

At the beginning of the export and the import in the SWPM tool, there is a screen where you are asked to provide a Number of Parallel Jobs. The default number is 3. This parameter controls how many table packages can be simultaneously exported or imported, and obviously it can have a huge impact on overall runtime. The System Copy Guide does not give much in the way of advice about choosing an appropriate number, and other documentation is sparse on this topic. Searching around SCN will bring up some old discussion threads in which advice is given ranging from choosing 1 to 3 jobs per CPU, and so forth, but it is difficult to find any empirical data to back up this advice.

 

This is an area needing more experimentation, but I can share with you my own recent experience with this parameter.

 

Export on Old Hardware

I exported from two different QAS machines, both using essentially identical hardware: HP ProLiant DL385 Gen1 servers, each with two AMD Opteron 280 2.4 GHz Dual-Core CPUs (a total of 4 cores, no hyperthreading) and 5 GB of RAM, running Windows Server 2003 and SQL Server 2005. I think you can see why I wanted to get off these machines. The application is ERP 6.04 / NetWeaver 7.01 ABAP. The databases were spread across six drive volumes.

 

Export 1: 3 Parallel Processes on 4 Cores

The first export involved a 490 GB database, which SWPM split into 135 packages. I hadn't yet figured out what I could get away with in terms of modifying the number of export jobs involved, so I left the parameter at the default of 3. The export took 8 hours 25 minutes. However, the export package at the end was only 50.4 GB in size.

 

Export 2: 6 Parallel Processes on 4 Cores

By the time I got around to the second export I had learned a thing or two about configuring these jobs. This time the source database was 520 GB, and SWPM split it into 141 packages. I configured the export to use 6 processes. During the export I noted that CPU utilization was consistently 90-93%, so this was probably the maximum the system would handle. This time the export took 6 hours 28 minutes, a two-hour reduction. As most of the time was spent exporting a single very large table in a single process, thus not benefiting at all from parallelization, I probably could have reduced this time considerably more using advanced splitting options. The resulting export package was 57.6 GB in size.

 

Import on New Hardware

The target machines were not identical to each other, but in both cases the target OS/DBMS was Windows Server 2012 R2 and SQL Server 2012. Both databases would be spread across eight drive volumes instead of the previous six.

 

Import 1: 3, then 12, then 18 Parallel Processes on 12 Cores

The target of my first export, and thus first import, was an HP ProLiant BL460c Gen8 with two Intel Xeon E5-2630 v2 2.6 GHz six-core CPUs with hyperthreading and 64 GB of RAM. Yeah, now we're talking, baby! Twelve cores, twenty-four logical processors, in a device barely bigger than my laptop.

 

At the start of this import, I still didn't really have a handle on how to configure the parallel jobs, so as with the matching export, I left it at the default of 3. After all, the DEV system I had migrated earlier didn't take that long -- but the DEV system had a considerably smaller database.

 

Five hours into the import I realized only 60 of the 135 packages had completed, and some quick back-of-the-napkin calculations indicated this job wasn't going to be finished before Monday morning when users were expecting to have a system. I did some research and some digging and figured it would be safe to configure one import job per core. However, I really didn't want to start all over from scratch and waste the five hours already spent, so with a little more experimentation I found a way to modify the number of running jobs while the import was in process, with immediate effect. More on this in a bit.

 

So first I bumped the number of parallel jobs from 3 to 12, and immediately I saw that the future was rosier. I monitored resource usage for a while to gauge the impact, and I saw CPU utilization bouncing between 35% to 45% and memory utilization pegged at 46%. Not bad, it looked like we still had plenty of headroom, so I again bumped up the processes, from 12 to 18. The overall import job took another impressive leap forward in speed, while CPU utilization only rose 2-3% more and memory utilization didn't change. It's entirely possible this machine could have easily handled many more processes, but I had seen an anecdotal recommendation that the parallel processes should be capped at 20 (I'm not sure why, but there is some indication that much beyond this number and the overall process may actually go slower -- but again, that may only be true for older hardware), and in any case all but one import package finished within minutes after making this change.

 

The final package took an additional three hours to import by itself. This was PPOIX, by far the largest table in my database at 170 GB (I have since talked to Payroll Accounting about some housecleaning measures they can incorporate), and thus without using table splitting options this becomes the critical path, the limiting factor in runtime. Still, I had gained some invaluable experience in optimizing my imports.

 

My new database, which had been 490 GB before export, was now 125 GB after import.

 

Import 2: 12 Parallel Processes on 8 Cores

The target of my second export, and thus second import, was also an HP ProLiant BL460c, but an older Gen6 with two Intel Xeon 5550 2.67 GHz quad-core CPUs with hyperthreading and 48 GB of RAM. Maybe not quite as impressive as the other machine, but still nice with eight cores, sixteen logical processors.

 

Based upon my experience running 18 processes on 12 cores, a 1.5:1 ratio, I started this import with 12 processes. I noted CPU utilization at 60-75% and memory utilization at 49%. Still some decent headroom, but I left it alone and let it run with the 12 processes. Despite seemingly matched CPU frequencies, the Gen6 really is not quite as fast as the Gen8, core for core, due to a number of factors that are not really the focus of this blog, and to this I attributed the higher CPU utilization with fewer processes.

 

This time, 140 of my 141 packages were completed in 2 hours 4 minutes. Again, PPOIX consumed a single import process for 6-1/2 hours by itself, in parallel with the rest of the import, and thus the overall import time was 6 hours 32 minutes. Next time I do this in a test system, I really will investigate table splitting across multiple packages, which conceivably could get the import time down to not much more than two, perhaps two and a half hours, or perhaps even much less should I be willing to bump up the process:core ratio to 2:1 or even 3:1.

 

The source database, 520 GB before export, became 135 GB after import on the target. Yeah, I'm quite liking this compression business.

 

Max Degree of Parallelism

In addition to adjusting the number of parallel jobs, I temporarily set the SQL Server parameter Max Degree of Parallelism (also known as MAXDOP) to 4. Normally it is recommended to keep MAXDOP at 1, unless you have a very large system, but as explained in Note 1054852 (Recommendations for migrations using Microsoft SQL Server), the import can benefit during the phase where secondary indexes are built with a higher level of parallelism. Just remember to set this back to 1 again when the import is complete and before starting regular operation of the new system.

 

Minimal Logging During Import

The other important factor for SQL Server-based imports is to temporarily set trace flag 610. This enables the minimal logging extensions for bulk load and can help avoid situations where even in Simple recovery mode the transaction log may be filled. For more details see Note 1241751 (SQL Server minimal logging extensions). Again, remember to remove the trace flag after the import is complete.
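Both of these temporary SQL Server settings can be scripted so they are not forgotten. Below is a minimal sketch (Python with pyodbc; the server name is hypothetical), run once before the import starts and once after it finishes to revert the settings.

import pyodbc

# Connect to the target instance (hypothetical server name, Windows authentication assumed).
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=TARGETHOST;DATABASE=master;Trusted_Connection=yes",
    autocommit=True,
)
cur = conn.cursor()

# Before the import: allow parallel index builds and enable minimal logging for bulk load.
cur.execute("EXEC sp_configure 'show advanced options', 1; RECONFIGURE;")
cur.execute("EXEC sp_configure 'max degree of parallelism', 4; RECONFIGURE;")
cur.execute("DBCC TRACEON (610, -1);")   # -1 enables the trace flag globally

# ... run the SWPM import ...

# After the import: revert before starting regular operation of the new system.
cur.execute("EXEC sp_configure 'max degree of parallelism', 1; RECONFIGURE;")
cur.execute("DBCC TRACEOFF (610, -1);")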

 

Adjusting Parallel Processes During Import

During Import 1 I mentioned that I adjusted the number of processes used from 3 to 12 and then to 18 without interrupting the import. How did I do that? There is a configuration file, import_monitor_cmd.properties, that SWPM creates using the parameters you enter at the beginning. The file can be found at C:\Program Files\sapinst_instdir\<software variant>\<release>\LM\COPY\MSS\SYSTEM\CENTRAL\AS-ABAP (your path may be slightly different depending upon options you chose, but it should be fairly obvious). Within the properties file you will find the parameter jobNum. Simply edit this number and save the file. The change takes effect immediately.
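If you would rather not edit the file by hand, the change can also be scripted. A minimal sketch follows; the path placeholders are the same ones shown above and must be replaced with your actual sapinst_instdir, and 18 is simply the example value I ended up using.

import re
from pathlib import Path

# Replace <software variant> and <release> with the values from your own import run.
props = Path(r"C:\Program Files\sapinst_instdir\<software variant>\<release>\LM\COPY\MSS\SYSTEM\CENTRAL\AS-ABAP\import_monitor_cmd.properties")

text = props.read_text()
# Change the value of jobNum; per the observation above, the running import
# picks up the new value immediately.
props.write_text(re.sub(r"(?m)^jobNum\s*=\s*\d+", "jobNum=18", text))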

 

Conclusions

How many parallel processes to choose is not a cut-and-dried formula. Generally, it seems that a ratio of processes to cores between 1.5:1 and 3:1 should be safe, but this will depend on the speed and performance of your CPU cores and general system hardware. On the Gen1 processors, 1.5:1 pegged them to over 90% utilization. On the Gen8 processors, 1.5:1 didn't even break 50%, while the Gen6 fell somewhere in between. The only way to know is to test and observe on representative hardware.

 

There is also a memory footprint for each parallel process, but with anything resembling modern hardware it is far more likely you will be constrained by the number of CPU cores and not the gigabytes of RAM. Still, a number I have seen mentioned is no more than 1 process per 1/2 GB of RAM.

 

I have seen a suggestion of a maximum of 20 processes, but the reasons for this suggestion are not clear to me, and I suspect this number could be higher with current hardware.
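Putting those rough rules of thumb together (1.5 to 3 jobs per core, roughly one job per 1/2 GB of RAM, a soft cap of 20), here is how I would derive a starting value. This is a sketch of my own heuristics from the runs above, not an official SAP recommendation.

def suggest_parallel_jobs(cores: int, ram_gb: int, ratio: float = 1.5, cap: int = 20) -> int:
    """Starting point for the SWPM 'Number of Parallel Jobs' parameter."""
    by_cpu = int(cores * ratio)   # 1.5:1 is the conservative end of the observed range
    by_ram = ram_gb * 2           # one job per 1/2 GB of RAM
    return max(1, min(by_cpu, by_ram, cap))

# The two imports described above:
print(suggest_parallel_jobs(cores=12, ram_gb=64))   # 18
print(suggest_parallel_jobs(cores=8, ram_gb=48))    # 12

Whatever the starting value, monitor CPU and memory utilization during the run and adjust if needed.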

 

If you have one or more tables of significant size, it is worthwhile to use the package splitter tool (part of SWPM) to break them up into multiple packages so that they can benefit from parallelization.

 

Thanks for following along, and hopefully you will find the above useful. If you have your own experiences and observations to add, please do so in the comments.

SUM, SPAM/SAINT and the story about Support-Package Levels


You have heard (or read in SAP Note 2039311) that SUM no longer allows "manually" increasing the Support Package (SP) level for central components like SAP_BASIS, SAP_ABA, SAP_APPL, SAP_HR and SAP_BW during a SUM run. This blog explains the reason for this.

 

Situation

Products with several software components involve complex dependencies that have to be considered for any maintenance activity (applying SPs or SP stacks, implementing enhancement packages, or upgrading). The Maintenance Optimizer is the central tool in SAP Solution Manager for planning the maintenance: it considers the dependencies and offers only valid SP combinations. The result is the stack.xml, a kind of recipe for the Software Update Manager (SUM) to apply the changes to the system.

Until SUM 1.0 SP12, it was possible to increase the SP level for software components on a SUM dialog during the BIND_PATCH phase, and thus to overrule parts of the stack.xml. With SUM 1.0 SP13, this is no longer possible for the central components listed in SAP Note 2039311 (the central note for SUM 1.0 SP13).

 

SUM or SPAM/SAINT

For applying only a few SPs, it was possible to use either SUM or SPAM/SAINT. SAP Note 1803986 provides a tool comparison with hints on when to use which tool. With SAP NetWeaver 7.4, there are some dedicated SP stacks that only SUM can apply to the system (see Note 1803986). This applies to SAP NetWeaver 7.40 Support Package 05 (SR1) and Support Package 08 (SR2). The reason is that these SPs include changes to the DDIC tools (not only DDIC content) that only SUM, not SPAM/SAINT, can handle. They are also accompanied by a new kernel version, so some software components are bound to specific kernel versions.

 

SUM being stricter now

SUM checks the kernel requirement provided by the stack.xml, calculates other dependencies, and prepares internal buffers. Later, SUM offers the dialog to adapt the SP levels of software components. If you were now to increase the SP level of a central component, you could end up in a situation where that SP level requires a newer kernel, and several of SUM's internal calculations would be invalidated.

 

So how do I …

The Maintenance Optimizer remains the central point, and you will have to plan your maintenance activities with the desired SP-level for the central components from the start.

 

Good news

SUM 1.0 SP13 patch level 2 will allow adapting the SP level of software component SAP_HR again, as this seems to be the biggest hurdle. The restriction remains in place for the other central components. Components other than the listed ones are not affected.

 

Boris Rubarth

Product Management Software Logistics, SAP SE

DMO: background on table split mechanism


This blog explains the technical background of table split as part of the database migration option (DMO).

As a prerequisite, you should have read the introductory document about DMO: Database Migration Option (DMO) of SUM - Introduction and the technical background in DMO: technical background.

 

During the migration of application tables, the migration of big tables might dominate the overall runtime. That is why SAPup considers table splitting to reduce the downtime of the DMO procedure. Table splitting is meant to prevent the situation in which all but a few tables have been migrated and only a small portion of the R3load processes is still working on these remaining (big) tables. The other R3load processes would be idle (to be more precise: would not run), and the long-tail processing of the big tables would increase the downtime unnecessarily. See figure 1 below for a schematic view.

 

[Figure 1: long tail when only a few big tables remain at the end of the migration]

SAPup uses the following approach to define the tail: if the overall usage of R3load pairs drops below 90 %, SAPup handles all tables that are processed afterwards as being part of the tail (see figure 2 below).

 

[Figure 2: the tail begins where the usage of R3load pairs drops below 90%]

During the configuration of the DMO procedure, you will configure a number of R3load processes, which determines the number of R3loads that may run in parallel. This explanation talks about R3load pairs that are either active or idle, which is rather a virtual view. Once an R3load pair has executed a job, it does not wait in idle status but ends. SAPup may then start another R3load pair. Still, for the discussion of table splitting, we consider a fixed number of (potential) R3load pairs, which are either active or idle. The following figure 3 illustrates this view.

 

[Figure 3: fixed set of R3load pairs, each either active or idle]

Prerequisites

To follow this blog, you have to be familiar with the basics of DMO, and with the DMO R3load mechanism, as discussed in the SCN blogs Database Migration Option (DMO) of SUM – Introduction and DMO technical background.

 

Automatic table splitting

SAPup will automatically determine the table split conditions, and there is no need and no recommendation to influence the table splitting. Your task is to find the optimal number of R3load processes during a test run, and provide the table duration files for the next run. (SAPup will use the table duration files to calculate the table splitting based on the real migration duration instead of the table size; see DMO guide, section 2.2 “Performance Optimization: Table Migration Durations”).

You may still want to learn more about the split logic, so this blog introduces some background on table splitting. Note that SAPup will not use R3ta for the table split.

 

Table split considerations

Typically, you will expect table splitting to happen for big tables only, but as we will see, the attempt to optimize the usage of all available (configured) R3load processes may result in splitting other tables as well. Still, splitting a table into too many pieces may result in bad export performance: lots of parallel, fragmented table segments will decrease read performance and increase the load on the database server. A table may be big, but as long as it has been completely processed before the tail processing starts, there is no reason to split that table. That is why the tool calculates the minimum number of table splits needed to balance these requirements.

The logic comprises four steps: table size determination, table sequence shuffling, table split determination, and assignment to buckets. A detailed explanation of the steps follows below. During the migration execution, SAPup organizes tables and table segments in buckets, which are a kind of work package for an R3load pair to export and import. During the migration phase, each R3load pair will typically work on several buckets, one after the other.

 

Step 1: Sorting by table size

SAPup will determine the individual table sizes, and then sort all tables descending by size.

In case you provide the table duration file from a previous run in the download folder, SAPup will use the table migration duration instead of the table size.

 

[Figure 4: table list sorted by size in descending order]

Assuming we only had sixteen tables, figure 4 above shows the sorted table list. The table number indicates the respective initial position in the table list.

 

Step 2: Shuffle table sequence

Migrating the tables in sequence of their size is not optimal, so the table sequence is reordered (“shuffled”) to achieve a good mixture of bigger and smaller application tables. Figure 5 below tries to illustrate an example.

 

[Figure 5: shuffled table sequence alternating between bigger and smaller tables]

SAPup uses an internal algorithm to shuffle the table sequence, so that table sizes alternate between bigger and smaller.
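SAPup's shuffle algorithm is internal, so purely to illustrate the idea of alternating bigger and smaller tables, a naive interleaving could look like the sketch below. This is my own illustration, not SAPup's code, and the example sizes are made up.

def shuffle_by_size(tables_sorted_desc):
    """Interleave a size-sorted table list: largest, smallest, second largest, second smallest, ..."""
    result, lo, hi = [], 0, len(tables_sorted_desc) - 1
    while lo <= hi:
        result.append(tables_sorted_desc[lo])        # next biggest table
        if lo != hi:
            result.append(tables_sorted_desc[hi])    # next smallest table
        lo, hi = lo + 1, hi - 1
    return result

# shuffle_by_size([("T4", 16), ("T11", 12), ("T2", 9), ("T7", 2)])
# -> [("T4", 16), ("T7", 2), ("T11", 12), ("T2", 9)]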

 

Step 3: Table split determination

SAPup will now simulate table splitting, based on the number of configured R3load processes. Note that changing the number of configured R3load processes later during the migration phase will affect the runtime of the procedure.

For the simulation, SAPup will work on “slots” that represent the R3load pairs, and will distribute the tables from the shuffled table list into these slots. Note that these R3load “slots” are not identical to the buckets. SAPup will use buckets only in a later step. A slot is, in a sense, the sum of all buckets that are processed by one R3load pair.

Initially, the simulation will assign one table from the shuffled table list into each slot until all slots are filled with one table. In an example with only eight R3load pairs, this means that after the first eight tables, all slots have a table assigned, as shown in figure 6 below.

 

[Figure 6: one table assigned to each of the eight slots]

In our example, SAPup has filled all slots with one table, and the second slot from the top holds the smallest table, so it has the lowest degree of filling.

Now, for all following assignments, SAPup will always assign the next table from the list to the slot that has the lowest degree of filling. In our example, SAPup would assign the next table (T7) to the second slot from the top. After that, SAPup will most probably assign the next table (T9) to the first slot, see figure 7 below (sounds like Tetris, doesn’t it?).

 

[Figure 7: subsequent tables assigned to the least-filled slots]

Finally, SAPup has assigned all tables from the shuffled table list to the slots, as shown in figure 8 below. Note that the figures are not precise in reflecting the table sizes introduced in figures 4 and 5.

 

[Figure 8: all tables assigned to slots]

As the last part of this simulation run, SAPup will now determine which tables to split. The goal is to avoid a long tail, so SAPup will determine the tail, and split all tables that are part of the tail.

SAPup determines the tail by the following calculation: SAPup sorts the slots by filling degree, and the tail begins at the point in time at which the usage of all R3load pairs is below 90%. All tables that are part of the tail – either completely or partially – are candidates for a split, as shown in figure 9 below. As an example, table T2 is shown as being part of the tail.

 

[Figure 9: tail of the slot assignment; table T2 reaches into the tail]

SAPup determines the number of segments into which a table will be split from the degree to which the table belongs to the tail: the portion of the table that does not belong to the tail sets the scale for the table segments to be created. For the example of table T2, this may result in three segments T2/1, T2/2, and T2/3.

SAPup will now extend the shuffled table list by replacing the detected tables with their table segments. Figure 10 shows the example with three segments for table T2.

 

[Figure 10: shuffled table list with table T2 replaced by segments T2/1, T2/2, and T2/3]

SAPup starts the next iteration of the simulation, based on the shuffled table list with table segments.

If the calculated tail is negligible (lower than a specific threshold) or if the third simulation has finished, SAPup will continue with step 4.
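To summarize steps 1 to 3, the simulation can be sketched roughly as follows. This is my own schematic reading of the description above, not SAPup code: table size stands in for migration duration, and the negligibility threshold, the fallback segment size, and the cap of ten segments per table are assumptions for the sake of the example.

import math

def simulate_table_split(tables, n_pairs, active_threshold=0.90, max_sim=3):
    """tables: shuffled list of (name, size); n_pairs: configured R3load pairs."""
    for _ in range(max_sim):
        # Greedy fill: the next table always goes to the slot with the lowest filling.
        fill = [0.0] * n_pairs
        placement = []                                   # (name, size, start time within its slot)
        for name, size in tables:
            slot = fill.index(min(fill))
            placement.append((name, size, fill[slot]))
            fill[slot] += size

        # The tail begins once so many pairs have finished that fewer than 90% are still busy.
        tail_start = max(fill)
        for finished, t in enumerate(sorted(fill), start=1):
            if (n_pairs - finished) / n_pairs < active_threshold:
                tail_start = t
                break

        if max(fill) - tail_start <= 0.05 * max(fill):   # tail negligible (threshold assumed)
            break

        # Split every table that reaches into the tail; the portion before the tail
        # sets the segment size (my interpretation of the wording above).
        new_tables = []
        for name, size, start in placement:
            if start + size > tail_start and size > 0:
                before_tail = max(tail_start - start, size / 10.0)   # fallback/cap assumed
                n_seg = min(10, math.ceil(size / before_tail))
                new_tables.extend((f"{name}/{i + 1}", size / n_seg) for i in range(n_seg))
            else:
                new_tables.append((name, size))
        tables = new_tables
    return tables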

 

Step 4: Table and table segments assignment to buckets

The result of step 3 is a list of tables and table segments whose sequence does not correlate with table size, and which has been optimized to fill all R3load slots with a small tail. Now SAPup will work with buckets (work packages for R3load pairs) instead of slots. This is a slightly different approach, but as the filling of the buckets uses the same table sequence as before, the assumption is that it yields the same result.

 

SAPup will assign the tables of this list to the buckets in the sequence of the list. The rules for this assignment are

  1. A bucket will get another table or table segment assigned from the list as long as the bucket size is lower than 10 GB.
  2. If the next table or table segment is bigger than 10 GB, the current bucket is closed, and SAPup will assign the table or table segment to the next bucket.
  3. SAPup will put segments of a split table into different buckets – otherwise two table segments would reside in one bucket, which would neutralize the desired table split.

The first rule means that a bucket may end up with more than 10 GB of table content: if a table of, say, 30 GB was not selected for a split, the respective bucket will have this size. The second rule may leave a bucket only partially filled, if the following table or table segment was bigger than 10 GB and was therefore put into the next bucket. The third rule means that, for example, a table with four segments of 5 GB each will produce several buckets of 5 GB. Figure 11 below tries to illustrate this with some examples, and a small code sketch of the three rules follows after the figure.

 

[Figure 11: examples of buckets resulting from the three assignment rules]
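Expressed as code, the three bucket rules could look like the following sketch. Again, this is my reconstruction for illustration, not SAPup code; each item is (table, segment, size in GB), with segment set to None for tables that were not split.

def assign_buckets(items, bucket_limit_gb=10.0):
    buckets = [[]]
    for table, segment, size in items:
        current = buckets[-1]
        cur_size = sum(item[2] for item in current)
        # Rule 3: two segments of the same split table never share a bucket.
        clashes = segment is not None and any(t == table and seg is not None for t, seg, _ in current)
        # Rule 1: a bucket accepts items only while it is below the limit,
        #         so it may end up above 10 GB once the last item is added.
        # Rule 2: an item bigger than the limit closes the current bucket.
        if current and (cur_size >= bucket_limit_gb or size > bucket_limit_gb or clashes):
            buckets.append([])
            current = buckets[-1]
        current.append((table, segment, size))
    return buckets

# Example: the two segments of T2 land in different buckets, and the unsplit
# 30 GB table T3 occupies a bucket of its own.
assign_buckets([("T1", None, 4.0), ("T2", 1, 5.0), ("T2", 2, 5.0), ("T3", None, 30.0)])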

Now SAPup has defined the distribution of tables and table segments into buckets, which in turn are part of a bucket list.

All this happens during the phase EU_CLONE_MIG_DT_PRP for the application tables (and during phase EU_CLONE_MIG_UT_PRP for the repository). Note that the DT or UT part of the phase name is no indication of whether the phase runs in uptime (UT) or downtime (DT): EU_CLONE_MIG_DT_PRP runs in uptime.

The migration of application tables happens in downtime during phase EU_CLONE_MIG_DT_RUN. During the migration phase, SAPup will start the R3load pairs and assign the next bucket from the bucket list. As soon as an R3load pair is finished (and closes), SAPup will start another R3load pair and assign the next bucket to this pair, as shown in the following figure 12.

 

[Figure 12: R3load pairs working through the bucket list during the migration phase]

Relevant log files are

  • EUMIGRATEDTPRP.LOG: tables to split, number of buckets, total size
  • EUMIGRATEDTRUN.LOG: summary of migration rate
  • MIGRATE_DT_RUN.LOG: details like R3load logs

 

Additional considerations

Typically, each R3load pair will execute more than one bucket. Exceptions may happen for small database sizes. As an example, for a total database size of 9992.3 MB and 20 R3load pairs, the tool would reduce the bucket size to put an equal load on all R3load pairs. The log will contain a line such as "Decreasing bucket size from 10240 to 256 MB to make use of 20 processes", as shown in the respective entry from EUMIGRATEUTPRP.LOG below:

 

1 ETQ399 Total size of tables/views is 9992.3 MB.

2 ETQ399 Decreasing bucket size from 10240 to 256 MB to make use of 20 processes.

1 ETQ000 ==================================================

1 ETQ399 Sorting 10801 tasks for descending sizes.

1 ETQ000 ==================================================

1 ETQ399 Distributing into 20 groups of size 500 MB and reshuffling tasks.

1 ETQ000 ==================================================

CTS+ or HTA?


You might have noticed that there are now two options to transport SAP HANA objects via ABAP: SAP HANA transport for ABAP and the enhanced Change and Transport System (CTS+).

 

If you do not know about these options yet, please refer to the following documentation and presentations:

 

After having gone through these options, you might now ask yourself: Can I use SAP HANA transport for ABAP to transport my SAP HANA objects via CTS?

The answer is: yes, you can – but you should only do so for a special use case.

 

At first, please think about your SAP HANA systems. How are they set up?

Do you use SAP HANA systems in a stand-alone set-up? Then you should use CTS+ for transporting your SAP HANA objects (or native transports via SAP HANA application lifecycle management).

Details about these options are provided here: http://help.sap.com/saphelp_hanaplatform/helpdata/en/88/f1de06b2be4239b71e3aed03e1a617/frameset

 

Do you use ABAP systems with an SAP HANA database as primary database? Then you should use the SAP HANA transport for ABAP.

 

But let’s have a closer look at the different types of SAP HANA applications that might exist on your systems:

  • Do you develop SAP HANA applications which are closely related to ABAP development objects or rely on them? Then SAP HANA transport for ABAP is the right choice. You can have the ABAP and SAP HANA objects in one transport request. You can assign the SAP HANA packages to the ABAP package that you already use for your ABAP development. Transaction SCTS_HTA offers both the synchronization (and, with this, the transport) of complete packages and of individual (changed) objects.
  • Do you develop native SAP HANA applications (using the SAP HANA repository) which do not have any relation to tables or views that exist on the ABAP side, but nevertheless run on the SAP HANA database of an existing ABAP system? Then you have two options: you can use CTS+ or SAP HANA transport for ABAP. Both options are valid:
    • CTS+ is a good option if you already work with CTS+ for other applications or come from a single SAP HANA system and now consolidate your systems. The developers working on the SAP HANA applications can continue working the way they did in the past. They might only have to get used to a new SID. With CTS+, you can use change recording, which is part of SAP HANA application lifecycle management. Each developer can work on his own changelists.
      From a configuration perspective, you don’t need a separate transport track in transaction STMS. You can re-use the existing ABAP landscape and just add the parameters required for CTS+.
    • HTA is a good option if you started with ABAP development, then moved on to some SAP HANA for ABAP applications, and now also want to create a native SAP HANA application. You can continue to use the transport mechanisms that you already know: SCTS_HTA for synchronizing the SAP HANA objects, SE09 for managing the transport requests. In this case, there is no need to configure change recording or CTS+. In fact, you should not enable change recording on the SAP HANA side (in SAP HANA application lifecycle management) if you want to use SAP HANA transport for ABAP to transport your modified SAP HANA objects. Keep in mind that with SAP HANA transport for ABAP, you can transport changed objects, but there is no way to find out who changed which object, whereas changelists in HALM change recording are user-specific. In addition, with SAP HANA transport for ABAP, you always transport the active version of an object as currently stored in the repository. If you used change recording and HALM, then you would transport the version of the object that is stored in the released changelist.

Decide on the option that suits you best and then stay with it and only use this transport option for your system landscape. Do not use several transport options for one development system. If you have to change the way you transport for one or the other reason, make sure that you do this in a safe way. This means that, if you move away from using change recording in SAP HANA application lifecycle management (HALM), always close all open changelists and transport them (via CTS+ or in native mode – whatever you were using). If you decide to stop using SAP HANA transport for ABAP, make sure that all objects are synchronized. In any case – for any switch of the transport mode – make sure that all transport requests are released and imported into all systems of your landscape.
