SAP HANA Multitenant Database Architecture
The SAP HANA multitenant database structure is a concept that allows multiple, isolated databases to run in one SAP HANA system.
It comes with three major advantages:
- Lower capital expenditure and TCO
- Much simpler database management
- The ability to build, deploy, and run multitenant cloud applications
To utilize multiple-container capabilities, the SAP HANA system needs to be installed in multiple-container mode, but it is also possible to convert single-container systems to multiple-container systems. From an architectural point of view, an SAP HANA multiple-container system has one system database for central system administration and additional multitenant database containers called tenant databases.
Note that when an SAP HANA system is installed in multiple-container mode, it has only one system ID (SID), and tenant databases are identified by the SID and a database name.
The figure below provides a better understanding of the overall architecture of the SAP HANA multiple-container structure. The system database provides centralized administration and can be used to create, drop, start, and stop tenant databases and to perform backup/recovery and system replication activities for all tenant databases at once.
An SAP HANA system consists of multiple servers: the name server, index server, preprocessor server, XS server, and so on, and the databases in a multiple-container system run different combinations of these servers. For instance, only the system database runs the name server, which contains landscape information about the system, including which databases exist where. It also provides index server functionality for the system database. However, unlike the name server in a single-container system, it does not own the topology information (the location of tables and table partitions in databases); all topology information is stored in the relevant tenant database catalog.
Tenant databases in this structure require only their own index server. The compile server and preprocessor server run on the system database and serve all tenant databases. The SAP HANA XS server runs embedded in the master index server of the tenant database by default, and it can also be added as a separate service if necessary. For now, multiple-container systems are not supported by the SAP HANA XS advanced model.
From a performance point of view, all tenant databases in the same HANA system share the same system resources (memory and CPU cores). It is possible to adjust the allocated system resources at any time depending on the requirements of each specific tenant database. Tenant databases are self-contained, and each has its own database users, database catalog, repository, backups, and logs. This structure also allows running cross-database SELECT queries, which is a great advantage when it comes to cross-application reporting.
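As a hedged illustration (the database names DB1 and DB2, the schema SAPSR3, the table MARA, and the password are placeholders, and the exact syntax should be checked against the SQL reference of your HANA revision), tenant administration from the system database and a cross-database query from a tenant could look like this:
-- Run in the system database: create, stop and start a tenant database (illustrative names)
CREATE DATABASE DB1 SYSTEM USER PASSWORD Manager1;
ALTER SYSTEM STOP DATABASE DB1;
ALTER SYSTEM START DATABASE DB1;
-- Run in a tenant database: cross-database SELECT against another tenant (DB2),
-- assuming cross-database access has been enabled between the two tenants
SELECT COUNT(*) FROM DB2.SAPSR3.MARA;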
Kill All Processes With a Given Partial Name
First, list the processes that match a pattern with a command like the one below:
ps aux | grep info_pattern
Then use pkill -f, which matches the pattern against any part of the command line:
pkill -f info_pattern
Choosing the Database Architecture on SAP HANA
After the recent release of the "more advanced" S/4HANA products, the SAP community is now more focused on cloud, on-premise, or hybrid deployment options, and it seems to me that the actual underlying SAP HANA database architecture is usually overlooked even though it is the core of the entire implementation. If you end up with the wrong HANA database architecture, it will be really hard to have a proper high-availability and disaster-recovery setup in your landscape no matter where it is deployed, cloud or on premise. And remember, when it comes to architecting SAP HANA, there are three key elements that must be considered carefully: scalability, effectiveness, and fault tolerance. In this article, I aim to provide detailed information about the currently available SAP HANA database architecture and deployment options.
There are six database deployment scenarios: three of them are perfectly fine for production use, two others come with some restrictions in production, and the last one is available for non-production systems only.
1. Dedicated Deployment
This is the classical and most common scenario, usually preferred for optimal (read: high) performance. As you can see from the figure below, there is one SAP system with one database schema created in one HANA database running on one SAP HANA appliance.

2. Physical Server Partitioning
In this scenario, there is one storage unit and one HANA server which is physically split into fractions. Two separate operating systems are installed on separate hardware partitions, each hosting its own HANA database with its own database schema dedicated to the respective SAP system. There should not be any performance problems as long as you have a correct SAP HANA sizing specific to this purpose.

3. MDC (Multitenant Database Containers)
You can read the previously published article about the MDC concept here. Basically, there is one HANA database (and one system database container for administration purposes), and multiple tenant database containers can be spread across several hosts. This is perfectly fine for production usage, and I have to admit that this scenario is becoming my favourite due to its flexibility and scalability. More information can be found in SAP Note 2096000.

4. MCOD (Multiple Components in One Database)
The concept of having multiple applications in one database has been available to SAP customers for more than 10 years, of course not with the SAP HANA database back then, but the technology was already there. This is basically multiple SAP systems/applications running on one SAP HANA database under different database schemas. Note that there are some restrictions in production usage, especially when combining applications on SAP HANA in a single database, explained in SAP Note 1661202 (white list of applications/scenarios) and SAP Note 1826100 (white list relevant when running SAP Business Suite on SAP HANA). These restrictions do not apply if each application is deployed on its own tenant database, but they do apply to deployments inside a given tenant database.

5. Virtualized Deployment
Since SAP HANA SPS 05, SAP has supported virtualization on HANA appliances. This scenario is based on VMware virtualization technology, where separate OS images are installed on one piece of hardware and each image contains one HANA database hosting one database schema for each SAP system. Note that there are some restrictions to the hypervisor (including logical partitions). More information can be found in SAP Notes 1788665, 2024433, and 2230704.

6. MCOS (Multiple Components in One System)
The MCOS scenario allows you to have multiple SAP HANA databases on one SAP HANA appliance, e.g. SAP DEV and test systems on one piece of hardware. Production support for this scenario is restricted to SPS09 or higher due to the availability of some resource management parameters. SAP does support running multiple SAP HANA DBMSs (SIDs) on a single production SAP HANA hardware installation, but this is restricted to single-host / scale-up scenarios only. I personally don't recommend this scenario because it requires significant attention to various detailed tasks related to system administration and performance management. Since MDC is available, it would be a much better option in terms of supportability and scalability. For more information about MCOS, check SAP Note 1681092.

Which one should be chosen?
For production environments, consider options 1 (Dedicated Deployment) and 3 (MDC), depending on a few factors including the existing infrastructure setup and capability, the availability and performance requirements of the business, database sizes and structure, HA or DR setup requirements (if any), and overall SAP landscape size. The reason for preferring MDC over Physical Server Partitioning is that it can achieve almost everything Physical Server Partitioning does, and it comes with great flexibility in terms of supportability, scalability, and re-architecture when needed.
Checks to Keep Your HANA System Healthy and Performing Well
What needs to be done and controlled to keep the HANA system healthy and performing well?
One way to do that is using the SQL scripts provided by SAP in OSS note 1969700 – SQL Statement Collection for SAP HANA.
This OSS note provides a collection of very useful SQL scripts with which you can check a wide range of areas that are critical to a HANA system. In this blog, I will be describing how you can use these scripts to maintain the good health of your HANA systems. I would suggest performing these checks every 6 months.
Note: Some of these SQL scripts are version dependent, so you need to use the one which suits your version.
HANA_Configuration_HybridLOBActivation_CommandGenerator
Description: SAP introduced the concept of hybrid LOBs from SPS07 onwards. Up to SPS06, memory LOBs were used for storing large objects. The disadvantage of memory LOBs was that when any query accessed the columns containing memory LOBs, the entire column along with the LOBs had to be loaded into memory, leading to higher memory usage. With hybrid LOBs, the LOB data is only loaded into memory when its size doesn't exceed a certain configurable threshold (1000 bytes by default); otherwise it resides on disk.
Executing the SQL script "HANA_Configuration_HybridLOBActivation_CommandGenerator" gives you the table names and column names that have memory LOBs, along with the commands to convert them into hybrid LOBs.
Action: Open an SQL console and execute the “Alter table …” commands from the output of the SQL script. For more information, refer to OSS notes 1994962 – How-To: Activation of Hybrid LOBs in SAP HANA and 2220627 – FAQ: SAP HANA LOBs.
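As a minimal sketch of what such a generated command looks like (the schema, table, and column names below are placeholders, and the 1000-byte threshold is simply the default mentioned above; see SAP Note 1994962 for the exact syntax for your revision):
-- Convert a memory LOB column into a hybrid LOB with a 1000-byte in-memory threshold (illustrative names)
ALTER TABLE "SAPSR3"."ZDOCUMENTS" ALTER ("CONTENT" BLOB MEMORY THRESHOLD 1000);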
HANA_Consistency_TableLOBLocation
Description: This script checks for tables and hybrid LOB containers that are located on different hosts. If they are located on different hosts, there is a significant performance impact, as the network data transfer time increases and with it the execution time of queries accessing the LOBs.
From the output of the script you can see the table name, the host in which the table is located and the host in which the LOB containers are located.
Action: You can use the following SQL statement to relocate them to the same host: alter table "<schema_name>"."<table_name>" move to '<target_host>:<target_port>' PHYSICAL
Note that the option "physical" is important here. Without the physical option, the table data will be moved to the target host excluding the LOB data; using the physical option moves the LOB data as well.
HANA_Disks_Overview
Description: This script gives you the fragmentation information of the data and log files. Usually it's the data files that get fragmented over time, so by default the script considers only data files for reporting. In the modification section you can set the PORT to '%03' so that only indexserver data files are reported. This avoids reporting data files of other services like the scriptserver and xsengine, which are very small in size; defragmenting those will not yield any positive results and is not desired.
Check the column "Fragmentation_Pct". If the fragmentation percentage is considerable, say more than 40 or 50%, then it's worth defragmenting that data file.
Action: You can execute defragmentation of the data files using the command – Alter system reclaim datavolume ‘<host>:<port>’ 120 defragment
You must take some precautionary measures if you have system replication enabled. You need to ensure that no replication snapshots exist in the system when you are running the defragmentation; otherwise it will fail with the error message "general error: Shrink canceled, probably because of snapshot pages SQLSTATE: HY000". Refer to OSS note 2332284 – Data volume reclaim failed because of snapshot pages, change the parameters as suggested, and then perform the defragmentation.
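If you want a quick look outside the script collection, a rough fragmentation estimate can also be taken from the monitoring view M_VOLUME_FILES; this is a sketch only, assuming the view exposes HOST, PORT, USED_SIZE, and TOTAL_SIZE columns (verify against the documentation of your revision), and the reclaim statement simply repeats the command from the Action above with a placeholder host and port:
-- Rough free-space / fragmentation estimate per data file (sketch; verify view and column names)
SELECT HOST, PORT, FILE_NAME,
       ROUND((TOTAL_SIZE - USED_SIZE) / 1073741824.0, 2) AS FREE_GB,
       ROUND((TOTAL_SIZE - USED_SIZE) * 100.0 / TOTAL_SIZE, 2) AS FRAGMENTATION_PCT
  FROM M_VOLUME_FILES
 WHERE FILE_TYPE = 'DATA';
-- Defragment the indexserver data volume down to 120 % of its payload (placeholder host:port)
ALTER SYSTEM RECLAIM DATAVOLUME 'hana01:30003' 120 DEFRAGMENT;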
HANA_IO_Commits
Description: You can use this script to check the avg. commit time of the data files for all nodes of your environment.
Action: From the example above, you can see that the avg. commit time for node 13 is too high. With almost the same number of commits, the avg. commit time of node 11 is much lower. The main reason is that the I/O time for the indexserver data file of node 13 is higher compared to the other nodes, which increases the commit time. In such scenarios you need to contact your OS team and ask them to check for I/O issues in the persistence layer. Refer to OSS note 1999930 – FAQ: SAP HANA I/O Analysis for more information on I/O analysis.
Also, if you have system replication enabled, low throughput in the system replication network between the primary and secondary servers can increase the commit time on the primary significantly, especially if you are using the "logreplay" operation mode. You can check the throughput by executing the command
hdbcons -e hdbindexserver "replication info"
at the OS level of the primary server. Look for the value of “shippedLogBufferThroughput”. If the throughput is too low you should contact your OS team and ask them to check the servers on the secondary site. Sometimes hardware issues on the secondary site can decrease the throughput.
The benchmark for avg. commit time is < 10 ms.
HANA_IO_Savepoints
Description: You can run this script after updating the start and end time in the modification section as per your requirement. Also, at the bottom of the script I prefer to use the column MAX_BLK_S in the ORDER BY clause. This gives you the savepoint statistics of all the nodes within the chosen time period, sorted by the maximum blocking duration. You will get a result like the one below.
The column MAX_BLK_S gives you the blocking period of the savepoints and the column MAX_CRIT_S shows the critical phase duration. From the example above, you can see the blocking phase duration of some of the savepoints is high but the critical phase duration is low.
Savepoint phases can be categorized as below in the order of their occurrence –
Phase 1: Pageflush
Phase 2: Critical phase
Phase 2.1: Blocking phase
Phase 2.2: Critical phase
Phase 3: Post critical
Phase 2 is the most important phase and has a severe impact on system performance if it takes a long time to complete. During the blocking phase, the system waits to acquire the lock. Once it acquires the lock, it enters the critical phase to write the data to persistence. If either the blocking or the critical phase takes too long, it impacts system performance because no DML operations are permitted in the system during this phase. This manifests in the form of long-running sessions and long-running threads. All threads performing DML operations like insert, update, and delete will be blocked until the lock is released by the savepoint blocking phase. This can lead to severe system slowness.
Action: The blocking phase duration can be high because of bugs in certain HANA revisions or for other reasons. Check OSS note 2100009 – FAQ: SAP HANA Savepoints, section 6 "How can typical savepoint issues be analyzed and resolved?", and OSS note 1999998 – FAQ: SAP HANA Lock Analysis, section 6 "How can internal lock waits be analyzed?"
I have also observed that if you have low throughput in the system replication network between the primary and secondary systems, the savepoint blocking phase duration can increase significantly while a full initial load is running between primary and secondary, and at other times as well. Refer to the section "HANA_IO_Commits" above to see how to measure the system replication throughput. So this is another area you need to check if you observe a high savepoint blocking phase duration very frequently.
If the critical phase duration is high, it is most likely due to I/O issues at the persistence layer. You can check OSS note 1999930 – FAQ: SAP HANA I/O Analysis for likely causes and solutions. You can also run the SQL script HANA_Configuration_MiniChecks and refer to the value of the savepoint write throughput. It should be at least 100 MB/s. If it is less than 100 MB/s, you should contact your OS/hardware team.
Note that these things change a lot with new HANA revisions, so you should watch the latest versions of the above-mentioned OSS notes to get updated information. For example, the consistent change lock is no longer acquired during the blocking phase from SPS11 onwards.
HANA_Logs_LogBuffers
Description: Executing this script shows you the log buffer wait count ratio and the log buffer race count ratio. Neither should be more than 1. In the example below you can see the log switch wait count ratio is a little high for node 17.
Action: There can be several ways to investigate this. You can increase the log buffer size and/or the number of log buffers. Refer to OSS note 2215131 – Alert Log Switch Wait Count Ratio. Note the caution mentioned in the note about the side effects of increasing the log buffer size too much.
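A hedged sketch of the parameter change described in that note (the values, and whether the parameters belong in global.ini or indexserver.ini for your revision, must be taken from the note and your own analysis):
-- Increase log buffer size and count system-wide (illustrative values; see SAP Note 2215131)
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
  SET ('logger', 'log_buffer_size_kb') = '2048',
      ('logger', 'log_buffer_count') = '8'
  WITH RECONFIGURE;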
There can also be I/O issues at the hardware level which may slow down the write speed from the log buffers to persistence, increasing the wait time for a log buffer switch. You can download the hardware configuration check tool from OSS note 1943937 – Hardware Configuration Check Tool – Central Note and compare the throughput of your system with the benchmark provided in the PDF document attached to the note.
Another reason can be sub-optimal distribution of data if you are using a scale-out landscape. If the load on some servers is much higher than on other nodes, it may lead to a high log buffer wait count ratio on those nodes. You can perform a table redistribution operation to ensure that tables are optimally distributed across all nodes and that no small subset of nodes is overloaded.
HANA_Tables_ColumnStore_PartitionedTables
Description: You can use this script to check the partitioning details of all tables. You can set the parameter "MIN_NUM_PARTITIONS" to 100 in the modification section of the script so that the output shows only those tables that have more than 100 partitions. Having too many partitions can have an adverse performance impact, as any query that does not use the partitioning column in its WHERE clause will have to scan all the partitions. There is also a limit on the maximum number of partitions allowed for one table: 1000 until SPS09 and 16000 from SPS10 onwards. So you need to keep a watch on the tables with the highest number of partitions.
Action: 1. First you need to find out the hosts where all the partitions reside. In HANA studio, under the system, right-click on "Catalog" and click on "Find Table". Enter the table name and go to the "Runtime Information" tab. In the bottom section "Details for table", under the sub-section "Parts", you can see the hosts where the partitions reside. See the screenshot below.
From here you can see the table has 6 partition groups, each partition group having several partitions.
2. Execute the SQL statement – alter table <table_name> move partition <partition group no.> to '<host>:<port>' – where the partition group number is the number that you see on the left half of the screenshot above, to move all the partitions belonging to that partition group to one host. Execute this SQL statement for all partition groups of the table.
Note that you can only move partition groups, you cannot move single partitions.
3. Merge the partitions – alter table <table_name> merge partitions
4. Run the ABAP report RSDU_TABLE_CONSISTENCY for the table, selecting the check "CL_SCEN_PARTITION_SPEC", first with the "Store" option and then with the "Repair" option.
Now you will see the number of partitions in the table has decreased significantly.
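Putting steps 2 and 3 together, a sketch with placeholder names could look like this (the partition group numbers and the target host:port come from the runtime information described in step 1):
-- Move every partition group of the table to the same host, then merge the partitions (illustrative names)
ALTER TABLE "SAPSR3"."ZBIGTABLE" MOVE PARTITION 1 TO 'hana01:30003';
ALTER TABLE "SAPSR3"."ZBIGTABLE" MOVE PARTITION 2 TO 'hana01:30003';
ALTER TABLE "SAPSR3"."ZBIGTABLE" MERGE PARTITIONS;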
HANA_RowStore_Overview
Description: Having a bigger row store has multiple disadvantages, such as higher memory consumption and an increase in database startup time (as the row store is entirely loaded into memory during DB startup). So it's always beneficial to keep the row store slim and trim.
The row store consists of memory segments of 64 MB each. Each memory segment further consists of fixed-size pages. When a large number of records are deleted from row store tables, this creates empty pages across multiple memory segments. This causes the row store to have a bigger size even though the used space is much less.
Executing this SQL script gives you an output where you can see the amount of space that is fragmented and the fragmentation %.
If the fragmentation % is more than 30, then it's better to perform a row store reorganization. The reorganization moves all the used pages from the sparse segments into other segments and hence frees up segments, which are then released, reducing the row store size.
Action: There are two ways to perform row store reorganization – online and offline. SAP recommends performing an offline reorganization to achieve the maximum compaction ratio. Refer to OSS note 1813245 – SAP HANA DB: Row store reorganization on how to perform a row store reorganization. Note that online reorganization has been discontinued from HANA 2.0 SPS02.
After running a row store reorganization, the row store may get fragmented again after a few months, depending on how much data is inserted and deleted. So you should check this at least every 6 months to see if there is significant fragmentation in the row store.
HANA_Tables_ColumnStore_AutoCompressionDisabled
Description: Auto compression should be enabled for all non-BW tables. After every delta merge, the mergedog triggers the compression of the table/table partitions automatically. (Note, however, that a manual execution of a delta merge does not trigger compression automatically; you need to do that manually.) This happens only for tables that have auto compression enabled. So, you can imagine, if you have lots of tables with auto compression disabled, it can lead to greater consumption not only of memory but also of disk space, and it increases the backup size as well.
This script gives you the table names that have auto compression disabled and the commands to enable it. By default, the modification section of this script excludes BW tables, as auto compression is not expected to be enabled for BW tables.
Action: Execute the generated “alter table …” commands to enable auto compression of the tables. For more information, refer to the OSS note 2112604 – FAQ: SAP HANA Compression.
HANA_Tables_ColumnStore_AutoMergeDisabled
Description: All non-BW tables should have auto merge enabled. This ensures that the delta store does not grow too much, which would have performance implications. The delta store contains uncompressed data to speed up insert/update queries, while the main store is read-optimized. So data from the delta store needs to be merged regularly with the main store to improve the read performance of queries and also to reduce memory consumption, as tables/table partitions are compressed automatically after a merge operation.
Note, however, that BW tables use the smart merge option, which is controlled by the SAP application. There are certain parameters in the mergedog section of indexserver.ini which control this. Auto merge should not be enabled for BW tables; otherwise it may interfere with the smart merge decisions and cause an adverse impact.
You can use this SQL script to find out the non-BW tables (by default the modification section of the script excludes BW tables) that have auto merge disabled.
Action: Execute the “Alter table …” commands from the output of the SQL script to enable auto merge for the non-BW tables. For more information, refer to the OSS note 2057046 – FAQ: SAP HANA Delta Merges.
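A minimal sketch of such an enabling statement, with a placeholder table name:
-- Enable automatic delta merge for a non-BW table (illustrative name)
ALTER TABLE "SAPSR3"."ZSALESDOC" ENABLE AUTOMERGE;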
HANA_Tables_ColumnStore_TablesWithoutCompressionOptimization
Description: You can use this script to find out the tables that have never been considered for compression optimization. If there are several big tables listed here, they can cause a significant increase in the memory consumption of the database.
Action: Execute the update command generated by the script to perform a forced compression of the table. You can do this only for big tables and ignore the small ones. For example, in the screenshot above, except for the first table, all the other tables are very small and need not be considered. For more information, refer to the OSS note 2112604 – FAQ: SAP HANA Compression.
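A hedged sketch of such an update command, with a placeholder table name (see SAP Note 2112604 for the exact variants and options):
-- Force compression optimization for a table that has never been compression-optimized (illustrative name)
UPDATE "SAPSR3"."ZBIGTABLE" WITH PARAMETERS ('OPTIMIZE_COMPRESSION' = 'FORCE');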
HANA_Tables_ColumnStore_ColumnsWithoutCompressionOptimization
Description: This script gives you the list of column store table columns without advanced compression. If there are table columns with more than a million records and without compression, they can significantly increase memory consumption.
Action: Execute the update command from the output of the script to perform compression of those table columns. For more information, refer to the OSS note 2112604 – FAQ: SAP HANA Compression
HANA_Tables_RowStore_TablesWithMultipleContainers_1.00.90+
Description: Row store tables usually consist of a single container, but sometimes more containers can get added, e.g. when adding columns. If multiple containers per row store table are enabled, then adjusting certain table structures (like adding columns) becomes faster. However, it can also introduce problems like performance overhead and terminations.
Using this SQL script you can find out which row store tables have more than one container. The script also outputs the SQL command to convert the tables into single-container tables.
Action: Execute the "Alter table …" commands from the output of the SQL script to convert the multi-container tables into single-container tables.
From HANA 2.0 SPS01, adjusting row store tables (like adding columns) no longer generates multiple containers. Refer to OSS note 2222277 – FAQ: SAP HANA Column Store and Row Store, section 22 "What are row store containers and which problems can be related to them?", for more information.
5 Basic Steps to a Successful SAP HANA Migration
SAP HANA is the most important technology innovation from SAP in the last decade and almost all SAP products released after SAP HANA have an extensive ability to integrate into this platform.
SAP HANA is not just a traditional database; it is an in-memory data platform deployable on premise or in the cloud. It empowers you to accelerate business processes, deliver more business intelligence with advanced innovations and capabilities, run your business faster, and simplify your IT environment.
“HANA is attached to everything we have.” Bill McDermott, CEO, SAP
When implemented properly, SAP HANA delivers outstanding results in terms of performance, integration capabilities, analytic intelligence, and data processing, and it improves the ROI of your SAP landscape with a faster time to value. This article is designed to help you plan and execute a successful technical migration to the SAP HANA platform, and it aims to provide a high-level overview of the most important, yet often overlooked, steps, with a focus on reducing the risk of your organization's journey to SAP HANA.
Step 1: Correctly size your HANA landscape
Sizing your SAP HANA landscape is a vital and fundamental step when creating your technical project plan, and it helps you realize the maximum benefit from your investment while reducing long-term total cost of ownership. Over-provisioned sizing leads to excess capacity and bloated hardware costs, while under-provisioning can cause unexpected delays that increase the cost of operational performance.
For the most part, sizing of the SAP HANA database is based on the main memory, which is determined by the amount of actual data stored in memory. Since data is compressed in HANA and the compression ratio depends on the scenario, it is not easy to estimate the amount of memory needed. Therefore, the memory sizing for SAP HANA should be performed using the SAP Quick Sizer tool, the relevant SAP notes, and the SAP sizing reports.
Correct sizing of SAP HANA consists of three main steps:
- Memory sizing for static and dynamic data
- Disk sizing for persistence storage
- CPU sizing for transactions, queries and calculations
Step 2: Choose the right platform and migration strategy
You can migrate to SAP HANA quickly and seamlessly by choosing the right platform that best fits your business needs, budget, and resources. SAP HANA can be deployed on premise for maximum control and reduced risk, or in the cloud for increased flexibility, scalability and faster time to value.
With on-premise deployment, you can choose a certified SAP HANA appliance from one of SAP's hardware partners. The preconfigured appliance with preinstalled software (by the hardware vendor) will help you harness the real-time power of the SAP HANA in-memory platform behind your own firewall. With the preconfigured approach, you get a solution validated by both SAP and the hardware vendor. On the other hand, SAP HANA Tailored Data Center Integration (TDI) provides more flexibility: you can significantly reduce infrastructure costs and simplify your SAP HANA integration by leveraging existing hardware and operations in your own data center.
There is a variety of cloud deployment scenarios. SAP also has its own private cloud offering called SAP HANA Enterprise Cloud. It includes an SAP HANA software license, the underlying cloud infrastructure, and SAP-managed services. Public cloud IaaS offerings allow you to bring your own SAP HANA license to run on third-party public cloud providers: Amazon Web Services, Google Cloud Platform, IBM Bluemix Cloud Platform, and Microsoft Azure.
After you decide on your HANA deployment scenario, you need to select the most effective migration strategy to reduce unforeseen problems and avoid longer system downtime during the technical migration, so you can realize a faster time to value.
Classical migration is the first and, so far, the most used approach for OS/DB migration. It is basically a heterogeneous system copy using classical migration tools such as SWPM, R3load, and the Migration Monitor. If your system does not require any version update to run on SAP HANA and you only need to perform the actual migration – for instance, you are migrating your Business Suite or BW system to SAP HANA as it is, without any additional component requirements – the classical migration approach would probably be the best way.
Another option is the Database Migration Option (DMO) of SUM, which combines system update, technical migration, and Unicode conversion (if required) into an optimized migration procedure from an ABAP-based SAP system running on anyDB to SAP HANA. DMO of SUM offers simplified migration steps (hence less error-proneness), reduced manual effort compared to classical migration, and only one business downtime period, which can also be optimized depending on the scenario. The source database remains consistent with this approach; it continues to run and is not modified, so it can be reactivated with only medium effort in case of a fallback. So, even though it is a newer method, it is a completely safe option.
If you have lots of technical issues with your existing system and would like to proceed with a greenfield approach with selective data migration for SAP HANA, then it might be better to perform a fresh installation on SAP HANA. This option can also be an efficient approach for companies that have mostly standard SAP processes and are planning to move to S/4HANA Cloud with a relatively small data footprint.
Step 3: Cleanse your data
SAP HANA offers a lot of advanced analytics and the opportunity to optimize your business operations by analysing large amounts of data in real time. It runs faster than all the traditional databases we have known so far, but you know what is better?
Run it even faster with reduced cost!
Data cleansing is one of the critical activities you must perform before bringing your SAP systems onto HANA, and unfortunately it is also the most overlooked step. There are three important benefits of cleansing your data:
- A reduced data footprint will also reduce your infrastructure, hardware, and SAP HANA licensing costs
- A reduced data size allows you to perform the technical migration with less business downtime
- By keeping only quality, necessary data in your system, SAP HANA performs even better after the technical migration
Step 4: Apply high implementation standards
It is generally a bad idea to cut corners during a technical migration project, and you need to allocate the time to get it right. Do not take shortcuts during this phase of the project, and focus on keeping high standards in your activities. You need to be methodical and understand all required activities from planning to cutover.
Your technical team should have plenty of experience and a good understanding of the technical migration guidelines, relevant SAP notes, and best practices. If things go wrong while performing the technical migration, you will be better prepared and less likely to miss the solution.
Make sure your source systems are ready for their journey to SAP HANA. Even if your source system version is supported for the migration, it is better to be on the latest release. SAP delivers the latest fixes and solutions to common problems with every release and support package. Make sure your system has these in place before starting a migration project.
Don't take unnecessary risks, especially when migrating your database. You can never have too many backups; therefore, make sure you have full backups and archive logs to allow regular restore points. Eliminate common risks to ensure a smooth technical migration.
“Short cuts make long delays.” J.R.R. Tolkien
If you want your project to be a success, then do not take shortcuts and always keep high standards.
Step 5: Do a proof of concept
You need to perform a proof of concept in a sandpit environment first, so you can validate your migration process to the SAP HANA platform. You can do this by copying your SAP production system to create an SAP sandpit environment and performing the actual migration on this system first.
This is a crucial step for any technical migration process, and I can assure you that the cost of duplicating the environment is well worth it, because:
- It gives you an idea of how long the migration takes
- You can identify possible issues in the sandpit system and reduce the project risk
- It helps the business realize the power of SAP HANA
- It helps you make important decisions in your project plan and improves overall project productivity
- You will have more testing and validation time in a non-critical environment
- It provides a sense of momentum throughout the project
“For the things we have to learn before we can do them, we learn by doing them.” Aristotle
SAP HANA Basic Structure
The SAP HANA database is a main-memory-centric data management platform. It runs on SUSE Linux Enterprise Server and is built in C++.
The SAP HANA database can be distributed across multiple machines.
The advantages of SAP HANA are as follows:
- SAP HANA is very fast because all data is kept in memory, so there is no need to load data from disk for every access.
- SAP HANA can be used for both OLAP (online analytical processing) and OLTP (online transaction processing) on a single database.
The SAP HANA database consists of a set of in-memory processing engines. The calculation engine is the main in-memory processing engine in SAP HANA. It works with other processing engines like the relational database engine (row and column engines), the OLAP engine, etc.
Relational database tables reside in the column or row store.
There are two storage types for SAP HANA tables:
- Row type storage (for row tables).
- Column type storage (for column tables).
Text data and graph data reside in the text engine and the graph engine respectively. There are some more engines in the SAP HANA database. Data can be stored in these engines as long as enough space is available.
SAP HANA Architecture
Data is compressed using different compression techniques (e.g. dictionary encoding, run-length encoding, sparse encoding, cluster encoding, indirect encoding) in the SAP HANA column store.
When the main memory limit is reached in SAP HANA, database objects (tables, views, etc.) that are not in use are unloaded from main memory and saved to disk.
Which objects are unloaded is determined by application semantics, and they are reloaded into main memory from disk when required again. Under normal circumstances, the SAP HANA database manages the unloading and loading of data automatically.
However, the user can load and unload data from individual tables manually by selecting a table in the respective schema in SAP HANA studio, right-clicking, and selecting the option "Unload/Load".
The SAP HANA server consists of:
- Index Server
- Preprocessor Server
- Name Server
- Statistics Server
- XS Engine
1. SAP HANA Index Server
The index server is the main server of the SAP HANA database. Details are as below:
- It's the main SAP HANA database component.
- It contains the actual data stores and the engines for processing the data.
- The index server processes incoming SQL or MDX statements.
Below is the architecture of Index Server.
1. Session and Transaction Manager: The session component manages sessions and connections for the SAP HANA database. The transaction manager coordinates and controls transactions.
2. SQL and MDX Processor: The SQL processor component accepts queries and passes them to the appropriate query processing engine, i.e. the SQL / SQLScript / R / calc engine. The MDX processor queries and manipulates multidimensional data (e.g. analytic views in SAP HANA).
3. SQL / SQLScript / R / Calc Engine: This component executes SQL / SQLScript statements and converts calculation data into calculation models.
4. Repository: The repository maintains the versioning of SAP HANA metadata objects, e.g. attribute views, analytic views, and stored procedures.
5. Persistence Layer: This layer provides the built-in disaster recovery capability of the SAP HANA database. Data is written to it at savepoints in the data volume.
2. Preprocessor Server
This server is used for text analysis and extracts data from text when the search function is used.
3. Name Server
This server contains all information about the system landscape. In a distributed system, the name server contains information about each running component and the location of data on each server, i.e. which server holds which data.
4. Statistics Server
The statistics server is responsible for collecting data related to the status, resource allocation/consumption, and performance of the SAP HANA system.
5. XS Server
The XS server contains the XS engine. It allows external applications and developers to use the SAP HANA database via the XS engine. External client applications can use HTTP to transmit data via the XS engine's HTTP server.
SAP HANA Landscape
"HANA" means High-Performance Analytic Appliance; it is a combined hardware and software platform.
- Due to changes in computer architecture, more powerful computers are available in terms of CPU, RAM, and hard disk.
- SAP HANA is the solution to this performance bottleneck: all data is stored in main memory, so there is no need to frequently transfer data from disk to main memory.
Below are the SAP HANA innovations in the field of hardware and software.
There are two types of Relational data stores in SAP HANA: Row Store and Column Store.
Row Store
- It is the same as a traditional database, e.g. Oracle or SQL Server. The only difference is that all data is stored in the row storage area in SAP HANA memory, unlike a traditional database, where data is stored on the hard drive.
Column Store
- The column store is part of the SAP HANA database and manages data in a columnar way in SAP HANA memory. Column tables are stored in the column store area. The column store provides good performance for write operations and at the same time optimizes read operations.
Read and write performance is optimized with the two data structures below.
Main Storage
Main storage contains the main part of the data. In main storage, a suitable data compression method (dictionary encoding, cluster encoding, sparse encoding, run-length encoding, etc.) is applied to compress the data, with the purpose of saving memory and speeding up searches.
- Write operations on compressed data in main storage would be costly, so write operations do not directly modify compressed data in main storage. Instead, all changes are written to a separate area of the column store known as "delta storage".
- Delta storage is optimized for write operations and uses only basic compression. Write operations are not allowed on main storage but are allowed on delta storage. Read operations are allowed on both storages.
We can manually load data into main memory with the "Load into Memory" option and unload data from main memory with the "Unload from Memory" option, as shown below.
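The same can also be done with SQL instead of the SAP HANA studio context menu; a minimal sketch with placeholder names:
-- Load all columns of a table into memory, or unload the table from memory again (illustrative names)
LOAD "SAPSR3"."MARA" ALL;
UNLOAD "SAPSR3"."MARA";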
Delta Storage
Delta storage is used for write operations and uses basic compression. All uncommitted modifications to column table data are stored in delta storage.
When we want to move these changes into main storage, we use the delta merge operation from SAP HANA studio, as below:
- The purpose of the delta merge operation is to move the changes collected in delta storage into main storage.
- After performing a delta merge operation on a SAP HANA column table, the content of main storage is saved to disk and the compression is recalculated.
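The delta merge can also be triggered with SQL instead of SAP HANA studio; a minimal sketch with a placeholder table name:
-- Move the content of delta storage into main storage for one column table (illustrative name)
MERGE DELTA OF "SAPSR3"."MARA";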
Process of moving Data from Delta to Main Storage during delta merge
There is a buffer store (L1-Delta), which is row storage. So in SAP HANA, a column table initially behaves like a row store due to the L1-Delta.
- The user runs update/insert queries on the table (the physical operator is the SQL statement).
- Data first goes to the L1-Delta, which holds uncommitted data before moving it further.
- Data then goes to the L2-Delta buffer, which is column-oriented and holds committed data.
- When the L2-Delta process is complete, data goes to main storage.
So, column storage is both write-optimized and read-optimized, thanks to the L1-Delta and main storage respectively. The L1-Delta contains all uncommitted data. Committed data moves to the main store through the L2-Delta. From the main store, data goes to the persistence layer (the arrow here represents a physical operator that sends SQL statements to the column store). After the SQL statement is processed in the column store, data goes to the persistence layer.
For example, below is a row-based table:
Table data is stored on disk in a linear format, so below is how data is stored on disk for row and column tables.
In the SAP HANA row store, this table is stored on disk in the following format:
And in the column store, the data is stored on disk as:
Data is stored column-wise in a linear format on disk and can be compressed using compression techniques.
So, the column store has the advantage of saving memory.
Industrial Internet of Things (IIoT) Timeline
Smart Home of the Future
How to reset the selected output format for ALV
If 'Always use selected format' is checked, then when exporting an ALV list or ALV grid, it will always be exported in the fixed format, with no popup for selecting a new format.
1) In an ALV list, you can enter the function code &RESET_EXCEL in the command field to reset the default setting. This can easily be done by end users themselves. Administrators can execute program SALV_BS_ADMIN_MAINTAIN via transaction SE38 to delete the default entry.
2) In an ALV grid, end users can reset it themselves as follows:
Right-click in the ALV grid -> select the 'Spreadsheet' context menu entry -> close the format selection popup -> export again; now a popup will be displayed for selecting a new format. Administrators can execute program SALV_BS_ADMIN_MAINTAIN to delete the default entry.
Note that from release 7.02, the function code &RESET_EXCEL doesn't work and program SALV_BS_ADMIN_MAINTAIN doesn't work correctly, so you first need to apply note 1716802 to resolve the &RESET_EXCEL issue and note 1590484 to correct SALV_BS_ADMIN_MAINTAIN.
Difference Between 2k, 4k, 8k, 16k Logical Page Size in SAP ASE
The main difference between a 2K, 4K, 8K, 16K page size is the amount of data stored on it. The larger the page, the more rows you can pack on it. This in turn translates into more rows brought into memory with each disk I/O.
If you are migrating from a larger page size to a smaller page size, it typically means you did not research the topic, started with a large page size, and are wasting memory with each I/O by bringing more rows into memory than you are using.
Typically large page sizes benefit DSS (Decision Support System) applications, where you really want to operate on all the rows that you are bringing into memory with each I/O, so they are really not suitable for OLTP (On Line Transactional Processing) environments. Typically smaller page sizes benefit OLTP applications, where you might bring a single page into memory only to process one row.
If you are migrating from a smaller page size to a larger page size, it typically means that you may have a mixed OLTP and DSS environment.
The following steps can be taken to check the page size:
1. Log into the SAP ASE server as the database user.
2. Run the following SQL statement:
1>SELECT @@maxpagesize
2>go
The value of the maximum page size will be displayed.
SAP BASIS and Monitoring
SAP's slogan says "Run Simple!". This may be suitable for the business and end users, but as far as the BASIS team is concerned, managing growing and diversifying SAP landscapes and their lifecycle is a huge undertaking, and it is not simple. The move to new SAP technologies (such as HANA, Sybase, BOBJ, virtualization, cloud, etc.) places more requirements, demands, and knowledge expectations on the BASIS, infrastructure, and IT operations teams. One example of this is a deployment of SAP HANA: you will need to protect it with a business continuity strategy for high availability and disaster recovery.
SAP monitoring and incident management are big parts of SAP support, and regardless of who is responsible for them, they need to be performed.
Literally thousands of alerts and metrics make up what is commonly known as CCMS (Computer Center Management System). Eventually, these tend to be neglected because they take so much time to monitor, and as the number of systems multiplies, they quickly become impractical to manage.
To help automate basic BASIS monitoring tasks, a basic checklist is useful for spot checks and regular health checks of the system; it may contain the following for an ABAP system:
Transaction | Description |
DB01 | Database Lockwaits & Deadlock |
DB02 | Database Allocation |
DB12 | Backup / Restore Information |
DB13 | Scheduled DB jobs |
ST04 | DB Performance Analysis (or use DBACOCKPIT) |
SM50 | Process Overview (SM51 or SM66 other view) |
SM13 | Update Records |
SM21 | System Log |
SM37 | Select Background jobs (or RZ01 for graphical view of jobs) |
SM12 | Lock Entry List |
SM04 | Users (or AL08 for other view) |
SP01 | Spool: Request Screen |
SP12 | TemSe Administration Consistency Check |
SM35 | Batch input: Initial Screen |
ST22 | ABAP Dump Analysis |
ST03N | Workload: Analysis |
ST02 | Tune Summary |
ST06 | O/S Monitor |
RZ20 | CCMS Monitor |
When looking at a system with these transactions, you should know the answers to the questions below:
- What to look for?
- What to do if a problem is found?
- How often should they be checked?
- How to set thresholds?
Monitoring Areas (in order of importance)
- Database Lock waits & Deadlocks
- Spool Management
- Database Allocation & Utilization
- Backup / Restore Information
- Scheduled DB jobs
- DB Performance Analysis
- Application Server Resource Utilization: Processes, Dispatcher Queues
- Update Records
- tRFC & qRFC Queues
- System Logs (App, DB, OS)
- Background jobs
- Application Locks and Enqueue
- Users
- ABAP Dump Analysis
- Workload: Application Performance Analysis
- O/S Monitor
- Connectivity
- Security