This tutorial will teach you the basics of SAP HANA. It is divided into sections such as SAP HANA Basics, SAP HANA Modeling, and Reporting, starting from an overview of the platform, the In-Memory Computing Engine, SAP HANA Studio, the Studio Administration view, and the System Monitor.
SAP HANA is an in-memory computing platform that allows real-time data analysis. This tutorial will give you insight into SAP HANA step by step.
With the advent of data-intensive concepts like the Internet of Things (IoT) and Big Data, SAP made a strategic decision to invest in developing its own database.
To add a new HANA system in SAP HANA Studio, the host name, instance number, and a database user name and password are required:
1. The port should be open to connect to the database.
2. For instance number 10, the port is 31015.
3. For instance number 00, the port is 30015.
4. Enter the HANA system details, i.e. host name and instance number, then click Next and Finish.
Catalog and Content: The Catalog contains all available schemas, i.e. the data structures and tables of the database. The Content node provides different views on the same physical data; these models are organized in Packages. From System Monitor, you can drill down into the details of an individual system in the Administration Editor.
System Monitor provides an overview of system information. Information modeling enables you to create modeling views on top of database tables and implement business logic to create a meaningful report for analysis. There are three types of Information Views: Attribute View, Analytic View, and Calculation View. Storing data in column tables is not a new thing.
Earlier, it was assumed that storing data in a columnar structure takes more memory and is not performance-optimized. In SAP HANA, the values of a column are stored together, so similar data types come together. This provides faster memory read and write operations with the help of the In-Memory Computing Engine.
In a conventional database, data is stored in a row-based structure, i.e. record by record. Storing data in column-based tables brings benefits such as data compression and faster access for analytic queries. In dictionary compression, each distinct value is stored once in a dictionary and the table cells store numbers (value IDs); numeric cells are more performance-optimized than character data.
In run-length compression, a repeated cell value is stored once, in numerical format, together with a multiplier that records how many times the value repeats consecutively in the table.
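The two compression schemes above can be illustrated with a short sketch in plain Python. This is a conceptual illustration, not HANA's actual implementation; the function names are invented.

```python
def dictionary_encode(column):
    """Store each distinct value once; cells become small integer value IDs."""
    dictionary = sorted(set(column))
    ids = {value: i for i, value in enumerate(dictionary)}
    return dictionary, [ids[v] for v in column]

def run_length_encode(cells):
    """Collapse runs of repeated cells into [value, multiplier] pairs."""
    encoded = []
    for cell in cells:
        if encoded and encoded[-1][0] == cell:
            encoded[-1][1] += 1   # same value again: bump the multiplier
        else:
            encoded.append([cell, 1])
    return encoded

city = ["Rome", "Rome", "Rome", "Paris", "Paris", "Rome"]
dictionary, ids = dictionary_encode(city)
print(dictionary)              # ['Paris', 'Rome']
print(ids)                     # [1, 1, 1, 0, 0, 1]
print(run_length_encode(ids))  # [[1, 3], [0, 2], [1, 1]]
```

Note how the two schemes compose: dictionary encoding turns characters into numbers, and run-length encoding then exploits the repetition that columnar storage makes contiguous.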
Functional difference, row vs. column store: it is advisable to use column-based storage if the SQL statement has to perform aggregate functions and calculations.
Column-based tables perform better when running aggregate functions like Sum, Count, Max, and Min, whereas row-based storage is preferred when the output has to return complete rows. The following example makes this easy to understand. When running an aggregate function (Sum) on a sales column with a Where clause, the SQL query reads only the Date and Sales columns, so a column-based table is performance-optimized: data is required from only two columns. When running a simple Select query, the full row has to be returned in the output, so it is advisable to store the table row-based in that scenario.
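The difference in access patterns can be sketched in plain Python. This only illustrates the two layouts, not HANA's engine; the column names are made up.

```python
# Row store: each record is stored contiguously.
rows = [
    ("2024-01-01", "A", 100),
    ("2024-01-02", "B", 250),
    ("2024-01-03", "A", 175),
]

# Column store: each column is stored contiguously.
columns = {
    "date":    ["2024-01-01", "2024-01-02", "2024-01-03"],
    "product": ["A", "B", "A"],
    "sales":   [100, 250, 175],
}

# Aggregating SUM(sales) in the column layout touches only one array...
total = sum(columns["sales"])

# ...while in the row layout every full record has to be visited.
total_rows = sum(sales for _, _, sales in rows)

print(total, total_rows)  # 525 525
```

Both layouts give the same answer; the column layout simply avoids scanning the columns the query does not need.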
Information Modeling Views
Attribute View: Attributes are non-measurable elements in a database table. They represent master data and are similar to characteristics in SAP BW. Attribute Views act as dimensions in a database and are used to join dimensions or other attribute views in modeling.
Analytic View: It has at least one fact table that contains measures and the primary keys of dimension tables, and is surrounded by dimension tables containing master data.
Calculation View: Calculation Views are used on top of Analytic and Attribute views to perform complex calculations that are not possible with Analytic Views.
A Calculation View is a combination of base column tables, Attribute views, and Analytic views to provide business logic.
Index Server: It contains the actual data and the engines for processing that data. The Index Server also has the Session and Transaction Manager, which manages transactions and keeps track of all running and closed transactions. It segments all query requests and directs them to the correct engine for performance optimization.
It contains several engines and processors for query execution.
Transaction and Session Management: It is responsible for coordinating all database transactions and keeping track of all running and closed transactions. When a transaction is executed or fails, the Transaction Manager notifies the relevant data engine to take the necessary actions. The Session Management component is responsible for initializing and managing sessions and connections for the SAP HANA system using predefined session parameters.
Persistence Layer: It provides a built-in disaster recovery system for the HANA database. It ensures that the database is restored to its most recent state, and that all transactions are completed or undone in case of a system failure or restart. It is also responsible for managing data and transaction logs, and contains the data backup, log backup, and configuration backup of the HANA system. Savepoints are written to the data volumes via a savepoint coordinator, which by default takes a savepoint every 5 minutes.
Preprocessor Server: The Index Server uses the preprocessor server for analyzing text data and extracting information from it when text search capabilities are used.
Statistical Server: It is responsible for collecting data related to system resources, their allocation, resource consumption, and the overall performance of the HANA system. It also provides historical data on system performance for analysis purposes, to check and fix performance-related issues in the HANA system.
This agent provides all the information about the HANA database, including its current state and general information.
The Studio Repository holds the code used to update SAP HANA Studio to newer versions. Each view has a different structure for dimension and fact tables.
Dimension tables are defined with master data, and a fact table has the primary keys of the dimension tables plus measures such as number of units sold, average delay time, and total price. A dimension table contains master data and is joined with one or more fact tables to implement business logic; dimension tables are used to create schemas with fact tables and can be normalized.
Examples of dimension tables: Customer, Product, etc. Suppose a company sells products to customers. Every sale is a fact that happens within the company, and the fact table is used to record these facts. For example, row 3 in the fact table records the fact that customer 1 (Brian) bought one item on day 4.
In a complete example, we would also have a product table and a time table so that we know what was bought and exactly when. The fact table lists the events that happen in our company, or at least the events that we want to analyze: number of units sold, margin, and sales revenue.
The dimension tables list the factors (Customer, Time, and Product) by which we want to analyze the data. Schemas are created by joining multiple fact and dimension tables to meet some business logic. A database uses the relational model to store data, whereas a Data Warehouse uses schemas that join dimension and fact tables to meet business logic.
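The fact/dimension example above can be reproduced with standard SQL. The sketch below uses SQLite for illustration (HANA's SQL dialect differs in DDL details), and the table and column names are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Dimension table: master data about customers.
cur.execute("CREATE TABLE dim_customer (customer_id INTEGER PRIMARY KEY, name TEXT)")
cur.executemany("INSERT INTO dim_customer VALUES (?, ?)",
                [(1, "Brian"), (2, "Fred"), (3, "Sally")])

# Fact table: one row per sales event, referencing the dimension.
cur.execute("CREATE TABLE fact_sales (customer_id INTEGER, day INTEGER, units_sold INTEGER)")
cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)",
                [(1, 4, 1), (2, 4, 2), (1, 5, 3)])

# Star-schema query: join the fact table to the dimension and aggregate.
cur.execute("""
    SELECT c.name, SUM(f.units_sold)
    FROM fact_sales f
    JOIN dim_customer c ON c.customer_id = f.customer_id
    GROUP BY c.name
    ORDER BY c.name
""")
print(cur.fetchall())  # [('Brian', 4), ('Fred', 2)]
```

The fact table stays narrow and append-only while the dimension table carries the descriptive attributes, which is exactly the split a star schema formalizes.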
There are three types of schemas used in a Data Warehouse: Star, Snowflake, and Galaxy.
Star Schema: Each dimension is represented by a single dimension table and is not further normalized. A dimension table contains the set of attributes that are used to analyze the data. Normalization is used to organize the attributes and tables of a database to minimize data redundancy; it involves breaking a table into smaller, less redundant tables without losing any information, with the smaller tables joined back to the dimension table.
Snowflake Schema: The schema in which dimension tables are further normalized into smaller tables is called a snowflake schema.
Galaxy Schema: It contains multiple fact tables that share common dimension tables.
New tables can be created using two methods: the SQL editor in HANA Studio, or the graphical option. Once a Create statement is executed in the SQL editor, a confirmation message appears, and the execution log also reports the time taken to execute the statement.
Once the statement is successfully executed, right-click the Tables node under the schema name in the System view and refresh; the new table will be reflected in the list of tables under the schema name.
An Insert statement is used to enter data into the table using the SQL editor. You can right-click the table name and use Open Data Definition to see the data types of the table.
In the graphical method, choose the table type, Column Store or Row Store, and define the data type of each column. Once the columns are added, click Execute. The new table will be reflected in the list of tables under the chosen schema, and the Insert option can be used to insert data into the table.
Use a Select statement in the SQL editor to see the contents of the table. When you right-click a Package, you get 7 options; clicking New also lets you create sub-packages inside a Package. You have to enter a package name and description while creating a Package. Attribute Views are used to join dimension tables or other Attribute Views.
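The create–insert–select sequence above maps to plain SQL. A minimal sketch using SQLite follows; HANA additionally distinguishes COLUMN and ROW tables in its CREATE syntax, and the table and column names here are made up.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Create the table (in HANA the statement would also specify COLUMN or ROW).
cur.execute("CREATE TABLE employee (emp_id INTEGER, emp_name TEXT)")

# Insert statements enter data into the table.
cur.execute("INSERT INTO employee VALUES (1, 'Anna')")
cur.execute("INSERT INTO employee VALUES (2, 'Ben')")

# Select statement to see the contents of the table.
cur.execute("SELECT * FROM employee")
print(cur.fetchall())  # [(1, 'Anna'), (2, 'Ben')]
```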
How to Create an Attribute View? Choose the package under which you want to create the Attribute View, and enter the Attribute View name and description. From the drop-down list, choose the view type and subtype. There are three subtypes of Attribute Views: Standard, Time, and Derived.
When you enter the Attribute View name, type, and subtype and click Finish, three work panes open. When you click Add Object in the Data Foundation, a search bar appears from which you can add dimension tables and Attribute Views to the Scenario pane.
Once the joins are defined, select multiple attributes in the Details pane, right-click, and choose Add to Output; the columns are added to the Output pane. Now click the Activate option and you will get a confirmation message in the job log. You can then right-click the Attribute View and choose Data Preview.
Note: When a view is not activated, it has a diamond mark on it; once you activate it, the diamond disappears, which confirms that the view has been activated successfully. Once you click Data Preview, it shows all the attributes that have been added to the Output pane under Available Objects.
These objects can be added to the Labels and Values axes by right-clicking and adding them, or by dragging them. Analytic Views use the real power of SAP HANA to perform complex calculations and aggregate functions by joining tables in the form of a star schema and executing star-schema queries. The fact table contains the primary key of each dimension table along with the measures. How to Create an Analytic View? Choose the package under which you want to create the Analytic View.
When you create an Analytic View, a new window opens. Click Data Foundation to add dimension and fact tables, and click Star Join to add Attribute Views. In the example, three dimension tables have been added. Now change the type of the facts in the fact table to measures.
Click the Semantic layer, choose the facts, and click the measures sign to change their type to measures, then activate the view. Once you activate the view and click Data Preview, all attributes and measures are added to the list of available objects, and there is an option to choose different types of charts and graphs. Calculation Views are used to perform complex calculations that are not possible with other types of views.
How to Create a Calculation View? Choose the package under which you want to create the Calculation View; a new window opens. You can use two types of Calculation View: Graphical and SQL Script. A Calculation View can consume other Attribute, Analytic, and Calculation views. With data category Cube, the default node is Aggregation.
You can choose a Star Join with the Cube data category. With data category Dimension, the default node is Projection. Fact tables can be added directly and used with the default nodes in a Calculation View. The following example shows how to use a Calculation View with a Star Join: copy and paste the script into the SQL editor and execute it.
First, change both dimension tables to Dimension Calculation Views. Create a Calculation View with a Star Join. In the graphical pane, add two Projections for the two fact tables, add the fact tables to the Projections, and add the attributes of these Projections to the Output pane. Add the parameters of the fact join to the Output pane, choose the parameters in the Output pane, and activate the view. Once the view is activated successfully, right-click the view name and click Data Preview.
Add attributes and measures to the Values and Labels axes and do the analysis.
Benefits of using a Star Join: It simplifies the design process. You do not need to create Analytic Views and Attribute Views; fact tables can be used directly as Projections.
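The shape the steps above produce, two fact-table projections combined and then joined to shared dimensions, can be imitated in standard SQL. The sketch below uses SQLite and invented table names purely for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, product TEXT);
    INSERT INTO dim_product VALUES (1, 'Laptop'), (2, 'Phone');

    -- Two fact tables, e.g. sales captured by two regions.
    CREATE TABLE fact_sales_east (product_id INTEGER, units INTEGER);
    CREATE TABLE fact_sales_west (product_id INTEGER, units INTEGER);
    INSERT INTO fact_sales_east VALUES (1, 10), (2, 5);
    INSERT INTO fact_sales_west VALUES (1, 3);
""")

# A projection of each fact table, combined, then joined to the dimension:
# the same shape a Calculation View with a Star Join produces.
cur.execute("""
    SELECT p.product, SUM(f.units)
    FROM (SELECT product_id, units FROM fact_sales_east
          UNION ALL
          SELECT product_id, units FROM fact_sales_west) AS f
    JOIN dim_product p ON p.product_id = f.product_id
    GROUP BY p.product
    ORDER BY p.product
""")
print(cur.fetchall())  # [('Laptop', 13), ('Phone', 5)]
```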
Alternatively, create Projections of both Analytic Views and join them, add the attributes of this join to the Output pane, then join to a Projection and add the output again. Activate the view and go to Data Preview for analysis.
With Analytic Privileges, you can assign different types of rights to different users on different components of a view. Sometimes it is required that data in the same view should not be accessible to users who have no relevant need for that data. For example, if you do not want your report developer to see the salary details or logon details of all employees, you can hide them using the Analytic Privileges option. Note that measures cannot be used to restrict access in Analytic Privileges; restrictions apply to attributes.
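The idea of restricting which attribute values a user can see can be sketched in plain Python. This is a conceptual illustration only; HANA enforces Analytic Privileges inside the engine, and the function and field names here are invented.

```python
def apply_analytic_privilege(rows, restrictions):
    """Keep only rows whose restricted attributes match the allowed values."""
    return [
        row for row in rows
        if all(row[attr] in allowed for attr, allowed in restrictions.items())
    ]

employees = [
    {"name": "Anna", "department": "Sales",   "salary": 50000},
    {"name": "Ben",  "department": "Finance", "salary": 60000},
]

# A report developer restricted to the Sales department sees only that slice.
report_developer = {"department": {"Sales"}}
print(apply_analytic_privilege(employees, report_developer))
```

Note that the restriction is expressed on an attribute (department), never on a measure (salary), matching the constraint described above.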
A new window opens; there is also an option to copy an existing Analytic Privilege package. Once you click the Add button, it shows all the views under the Content tab, and the selected view is added under the reference models.
To add attributes from the selected view to the Analytic Privilege, click the Add button in the Associated Attributes Restrictions window, add the objects you want from the Select Object option, and click OK. The Assign Restriction option lets you add the values you want to hide in the modeling view from a specific user. The status message "completed successfully" in the job log confirms activation, and the view can now be used by adding it to a role.
The view is then added to the user role under Analytic Privileges. To delete an Analytic Privilege from a specific user, select the view under the tab and use the red Delete option.
Use the Deploy arrow at the top, or F8, to apply this to the user profile.
Information Composer allows you to import data in workbook format. It is used by business users who do not have technical knowledge, and it provides simple functionality with an easy-to-use interface. Information Composer helps to extract, clean, and preview data, and automates the creation of the physical table in the HANA database.
How to upload data using Information Composer? It allows us to upload a large amount of data (up to 5 million cells). Link to access Information Composer: http: You can perform data loading or manipulation using this tool, and the details of tables created using IC can be found under these tables.
Using the Clipboard: Another way to upload data in IC is to copy the data to the clipboard and upload it with the help of Information Composer. Information Composer also lets you preview the data, and even provides a summary of the data in temporary storage. It has a built-in data-cleansing capability that is used to remove any inconsistency in the data.
Once the data is cleansed, you need to classify each column, i.e. whether it is an attribute; IC has a built-in feature to check the data types of the uploaded data.
The final step is to publish the data to physical tables in the HANA database. Two sets of user roles can be defined to use data published with Information Composer; one role does not allow the user to upload or create any information views using IC. For export and import: you do not need to recreate all tables and information models, as you can simply export them to a new system, or import them into an existing target system, to reduce the effort.
The export option can be accessed from the File menu at the top, or by right-clicking any table or information model in HANA Studio. Users can use this option to export all the packages that make up a delivery unit, and the relevant objects contained in them, to a HANA server or a local client location. The user should create the Delivery Unit before using this option; you can then see the list of all packages assigned to the delivery unit.
This exports the selected delivery unit to the specified location.
Developer Mode: This option can be used to export individual objects to a location on the local system. The user can select a single information view, or a group of views and packages, choose the local client location for export, and click Finish.
This can be used on request: suppose a user creates an information view that throws an error which he cannot resolve; he can then use this option to export the view along with its data and share it with SAP for debugging purposes.
It can also be used to export the landscape from one system to another, and to export tables along with their content.
Data from Local File: This is used to import data from a flat file such as a .csv file. It gives the option to keep the header row, and to create a new table under an existing schema or import the data into an existing table.
You can preview the data and check the data definition of the table, which will match that of the file. You can choose to import from a server or from the local client.
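The flat-file import can be sketched with the standard library. HANA Studio and Information Composer do this through the GUI; the file content, table name, and columns below are made up for illustration.

```python
import csv
import io
import sqlite3

# Flat-file content with a header row (normally read from a .csv file on disk).
flat_file = io.StringIO("emp_id,emp_name\n1,Anna\n2,Ben\n")

reader = csv.reader(flat_file)
header = next(reader)          # "keep the header row" option: use it as column names

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute(f"CREATE TABLE imported ({header[0]} TEXT, {header[1]} TEXT)")
cur.executemany("INSERT INTO imported VALUES (?, ?)", reader)

# Data preview: the definition matches the file, the content matches the rows.
cur.execute("SELECT * FROM imported")
print(cur.fetchall())  # [('1', 'Anna'), ('2', 'Ben')]
```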
The user need not trigger the activation of the imported views manually. Click Finish, and once the import completes successfully, the views are populated in the target system.
Developer Mode: Browse to the local client location where the views were exported and select the views to be imported; the user can select individual views or a group of views and packages, then click Finish.
For a mass import, configure the system accordingly and click Finish. Reporting tools enable business managers, analysts, sales managers, and senior management to analyze historic information, create business scenarios, and decide the business strategy of the company.
This generates the need to consume HANA modeling views in different reporting tools and to generate reports and dashboards that are easy for end users to understand. Web Intelligence (WebI) uses a semantic layer called a Universe to connect to the data source, and these Universes are used for reporting in the tool.
IDT supports multi-source data sources, whereas UDT supports only single-source.