Channel: SCN : Document List - SAP BusinessObjects Web Intelligence

Tips for Optimizing the Performance of Web Intelligence Documents



DRAFT DISCLAIMER - This document is a work in progress and will be released 1 chapter at a time.  Please follow, bookmark and subscribe to updates to ensure you are notified of the latest changes.  It is also a living document and we would love to hear your feedback and tips & tricks!  Comment or private message anything you would like to see added, changed or removed.
.

Document History

.
Date        Who             What
10-01-2014  Jonathan Brown  Created initial document structure and completed Chapter 1 - Client Side Performance
10-02-2014  Jonathan Brown  Made some minor updates to the formatting and some links
10-08-2014  Jonathan Brown  Started on Chapter 2 - Process Best Practices
10-09-2014  Jonathan Brown  Finished Chapter 2 - Fixed some formatting issues
10-15-2014  Jonathan Brown  Updated Introduction to discuss overlap with SCN DOC http://scn.sap.com/docs/DOC-58532
10-17-2014  Jonathan Brown  Started Chapter 3.  Tips 3.1 - 3.4 added.
10-22-2014  Jonathan Brown  Added Tips 3.5 and 3.6
10-24-2014  Jonathan Brown  Added Tips 3.7 - 3.9 to complete Chapter 3.
10-31-2014  Jonathan Brown  Started Chapter 4.
11-07-2014  Jonathan Brown  Completed Chapter 4 and published the latest version of the doc.
11-14-2014  Jonathan Brown  Completed Chapter 5.
12-01-2014  Jonathan Brown  Modified the list of functions that can turn off caching, as per Matthew Shaw's suggestion in the comments
12-09-2014  Jonathan Brown  Started Chapter 6.
12-15-2014  Jonathan Brown  Finished Chapter 6.
12-18-2014  Jonathan Brown  Added link to Ted Ueda's blog about sizing
02-19-2015  Jonathan Brown  Added Tip 4.9 on SL security impacts on performance
05-21-2015  Jonathan Brown  Added Tip 3.10 - Mandatory vs Optional prompts - Started Chapter 7
12-04-2015  Jonathan Brown  Added Tips 4.10, 5.5 and 1.7; also added link to the new Performance Testing Pattern Book in the Introduction section
12-21-2015  Jonathan Brown  Added Tip 3.11 and updated Tip 3.7

.

Introduction

.
This document is intended to be a central repository for all things related to Web Intelligence and performance.  It is a living document and will grow over time as new tips, tricks and best practices are discovered.  We encourage suggestions and corrections on the content within and hope the community will collaborate on this content to ensure accuracy.
.
Please feel free to bookmark this document and receive email notifications on updates.  I would also love to hear your feedback on the contents of this doc, so feel free to comment below, private message me, or just like and rate the document to give me feedback.
.
I am the writer of this document but information contained within is a collection of tips from many sources.  The bulk of the material was gathered from within SAP Product Support and from the SAP Products & Innovation / Development teams.  Some of the content also came from shared knowledge on the SAP Community Network and other like websites.
.
The purpose of this document is to bring awareness to known issues, solutions, and best practices, in hopes of increasing the throughput of existing hardware, improving the end user/consumer experience, and saving time and money on report design/consumption.
.
This idea originated from an Americas' SAP Users Group (ASUG) session presented in September 2014.  That presentation spawned this document as well as another high-level best practices document found here:  Best Practices for Web Intelligence Report Design
.
While the purpose of this document is to focus on the performance of Web Intelligence documents, the Best Practices Guide above covers high-level best practices across Web Intelligence in general.  There is a lot of overlap between the two documents, as they both spawned from the same source presentation at the ASUG User Conference.
.
The 2014 ASUG Session Presentations for Web Intelligence can be found here:  2014 ASUG SAP Analytics & BusinessObjects Conference - Webi
** NEW ** (12/2015)
BI Platform 4.x Performance Testing Pattern Book V1.0 Released!
.
.

Chapter 1 - Client Side Performance

.
Client side performance tips and tricks cover anything that is specific to the client machine.  This includes the HTML, Applet and Rich Client Interfaces as well as the Browser that the client uses to a certain degree.
.

TIP 1.1 - Use HTML Interface for Faster viewing/refreshing of Reports

.
The HTML Interface is a light-weight thin client viewer.  It uses HTML to display and edit the Webi Documents.  Since it is a thin client application that requires little more than displaying and consuming HTML, it is a great choice for those users that want fast document viewing and refreshing in their browser.
.
The HTML Interface does have somewhat fewer features in comparison with the Applet Interface, so you will have to weigh the benefits of performance vs. functionality.
.
Chapter 1.4 of the Webi User Guide covers the differences between the HTML, Applet and Rich Client Interfaces.  Review this to help you decide whether or not the HTML Interface will do everything you need it to do.
.
Here is a screenshot example of what the feature comparison matrix looks like in the user guide:
.
interface_comparison_example.png
Below is a link to our Web Intelligence documentation page on our Support Portal.  Go to the End User Guides section to find the latest Webi User Guide documentation.
.
.
Here is also a direct link to the BI 4.1 SP04 (most current at time of this writing)
.

TIP 1.2 - Upgrade to BI 4.1 SP03+ for single JAR file Applet Interface

.
BI 4.x introduced a new architecture for the Applet Interface, aka the Java Report Panel/Java Viewer.  Previous versions used a single JAR file called ThinCadenza.jar.
.
BI 4.0 and earlier versions of BI 4.1 split this architecture out into over 60 JAR files.  This was originally done for ease of maintenance and deployment, but later Java updates made this architecture more cumbersome.  Java security updates and restrictions that are now enforced by default have made the performance of this architecture too slow in many cases.
.
BI 4.1 SP03 and above have reverted to a single JAR file deployment.  This will often improve performance on the client side, since the security and validation checks that previously ran against each of the 60+ JAR files now only have to run against one.
.
The What's New Guide below talks about this change briefly.  The change should be mostly invisible to end users, except perhaps for the improved performance.
.
.
This KBA also covers this issue in a limited fashion:
.
.

TIP 1.3 - Ensure Online Certificate Revocation Checks aren't slowing down your Applet Interface

.
Online certificate revocation checks are turned on by default in newer versions of the Java Runtime Environment (JRE).  These checks tell the client-side JRE to go out to online servers to validate the certificates that the applet JAR files are signed with.  On slower networks, this can add a lot of overhead.
.
Older versions of the JRE did not have this enabled by default so it wasn't an issue.
.
Since BI 4.x had 60+ JAR files to load for the Applet, it could potentially take much longer to run these checks across all of those files.  On slower internet connections, this could equate to several minutes of delay!
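If you confirm revocation checking is the bottleneck, the behavior can be tuned through the JRE's deployment.properties file (or the equivalent Java Control Panel setting).  The property name and value below follow the Java deployment configuration documentation; treat this as a sketch to verify against your specific JRE version, and note that disabling the checks weakens security, so it is generally only appropriate for isolated test environments:

```properties
# deployment.properties -- typical per-user location on Windows (path can vary by JRE version):
#   %USERPROFILE%\AppData\LocalLow\Sun\Java\Deployment\deployment.properties
# Relax online certificate revocation checking.  WEAKENS SECURITY --
# documented values are ALL_CERTIFICATES, PUBLISHER_ONLY and NO_CHECK.
deployment.security.revocation.check=NO_CHECK
```

In production, a faster network path to the revocation servers (or keeping the default checks and accepting the small delay) is usually the better trade-off.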

 

I talk about this in much more detail in the following Wiki and KBA:
.

TIP 1.4 - Make sure JRE Client Side Caching is working

.
When troubleshooting client side JRE performance issues, one of the first things you want to check is that JRE Caching is enabled and WORKING.  We have seen issues with performance when caching was either disabled, configured incorrectly, or was just not working because of some sort of system or deployment issue.
.
One example is on a Citrix deployment.  Since each user can potentially have a unique and dynamic "Users" folder, the cache may not be persistent across sessions.  Setting the cache to a common location that can be persistent across sessions may help in this type of scenario.
.
We cover a lot more on how to enable and validate JRE cache in my Wiki below
.
.

TIP 1.5 - Ensure you are not running into these known JRE Security Change issues

.
A number of Java security updates and changes have caused issues with the Applet Interface.  The known issues are well documented and can be found on this Wiki:
.
.
This is divided into individual sections for the known issues on different XI 3.1 and BI 4.x versions.
.
Here are direct links for the BI 4.0 and BI 4.1 known issues pages
.
While these are not technically performance issues, they will slow down your end users and will cause delays in viewing, modifying and refreshing documents and instances.
.
SAP only releases Patches/Support Packs every few months so when Oracle security changes come into play, there can sometimes be a bit of a delay before we can have a patch out to resolve/address the change.  Keep this in mind when pushing the latest and greatest Oracle JRE updates out to your clients.
.

TIP 1.6 - Choose the right client - Webi Rich Client vs HTML vs Applet Interfaces

.
Each of the Interfaces has a list of pros and cons.  Choosing the right client interface for Web Intelligence is about striking a balance between functionality, performance and convenience.
.
Chapter 1.4 of the Webi User Guide covers the differences between the interfaces.  Reading and understanding this should help you decide which interface to use.  It's often not as cut and dried as standardizing on only one interface, though.  Some users may like the HTML interface for viewing documents but prefer the Rich Client interface for creating and editing documents.  It is really up to the user which interface they use.
.
Use the Portal link below to find the latest Webi User Guide.  Chapter 1.4 covers the interface differences.
.
.
Here is also a direct link to the BI 4.1 SP04 (most current at time of this writing)
.
.
As a general guideline, we recommend the following use cases for the interfaces:
.
Webi HTML Interface
  • Best interface for report consumers who will mostly be running predesigned reports and doing only light modifications
  • The HTML interface utilizes the 64-bit backend servers but lacks some of the design capabilities of the Applet Interface
.
Webi Applet Interface
  • Best interface for report designers and power users who will be creating, modifying and doing advanced analysis of documents and data.
  • This interface takes advantage of 64-bit backend servers and can generally handle larger amounts of data/calculations as it utilizes backend servers to do the heavy lifting.
  • Since this is a Web Application, timeouts can occur when leaving the session idle or when carrying out long running actions.

.

Webi Rich Client Interface

  • This stateless interface has almost all of the features and functionality that the Applet interface does plus a few additional features of its own.  This should be used by advanced designers and power users that wish to have a stable design environment for larger documents
  • Can be used with local data sources and some desktop type data sources such as Excel and Access
  • Also can be used in 3-tier mode which takes advantage of backend servers for data retrieval

 

TIP 1.7 - Hide the Left Panel to gain some time when opening documents

.

Documents will open faster if you hide the left panel for a document by default.  This is because the viewer does not need to spend the extra time rendering the different components that make up the left panel until they are needed.  This gives the end user a faster initial load time and postpones the delay to the point where they actually need to see the left panel.  If they never need to open the left panel, then that time is saved for the entire workflow!

.

Note:  This does require users to know how to find that panel again, so some user training might be necessary.  This option is set per user / per client as well, so once a user minimizes the panel in a particular viewer, it should persist until they expand it again.

.

As an added benefit, the users will see more of the report on initial view and will have to do less scrolling to see the whole width of the page.

.

Here is where the option is:

side panel minimized.png

.

Be sure to save your document with it minimized to benefit from this tip

.

Recommendation: Save your documents with the left panel minimized to help reduce the load time of a document.  Especially helpful for report consumers who will not be doing any modification of the documents.

.

Chapter 2 - Process Best Practices

.

When we talk about "Process" Best Practices, we are really talking about the workflows around how we utilize Web Intelligence reports in our Business Processes.
.
This chapter will cover a number of Best Practices that will allow you to build good business processes or workflows around your Web Intelligence documents
.
Let's get started!
.

TIP 2.1 - Schedule reports to save time and resources

.

This may seem like a no-brainer but we see countless support incidents come through that could be avoided with a simple process around when to schedule a document vs view it on-demand.
.
The Best Practices threshold for Scheduling is 5 minutes.  If a report takes more than 5 minutes to refresh and render, then that report should be scheduled.
.
Scheduling allows for a user or administrator to offload the processing of a document to a backend server so they are not forced to sit and wait for the report to finally come up on their screen.
.
Benefits of Scheduling Documents
  • Provides lower user wait times when implemented correctly
  • Allows us to offload processing to non-peak times
  • Can help reduce sizing requirements for concurrent users
  • Reduces impact on Database during Peak times
  • Can combine Instances with Report Linking to produce smaller, faster documents

.

Studies have shown that in today's world, end users are unlikely to wait for more than 5 seconds for a video to load.  For example, if you are on YouTube and click the Play button, would you wait 5 minutes for that video to load up and start playing?  I think most of us would give up or try to refresh the video again after about 10-20 seconds.
.
This holds true for Web Application users too.  If a report doesn't view within a minute or two, the consumer is very likely to close the request and try again, or just give up all together.  The danger in them submitting the request again is that they are using up even more resources on the backend servers when they do this.  Here's a workflow as an example:
.
  1. UserA logs in to BI Launchpad and navigates to the "Monster Finance Report" document
  2. UserA Views this document and clicks the refresh button to get the latest data
  3. After about 2 minutes, UserA is not sure what is going on.  The report appears to be refreshing still, but given the fact that UserA is impatient, he suspects that the refresh has "hung" and closes the viewer.
  4. UserA decides to test his luck and submit the request again.  This essentially creates a new request for the same data and potentially works against BOTH requests as they compete for resources on the BI servers and the database side.
  5. After a few more minutes, UserA gives up and moves on.  Meanwhile he has no idea the amount of resources and time he's wasted in the background.

.

In the above scenario a few bad things happened:
  • UserA Never got his report and had a bad experience
  • Backend resources were wasted without any usable results
.
Both of these could have been avoided by building processes around proper use of scheduling.
.
Here are some tips on how to best utilize scheduling:
.
  1. Educate your users to schedule anything that takes over 5 minutes to run
  2. Encourage users to Schedule reports that they know they will need throughout the day to non-peak hours before their day begins
  3. Schedule Documents to formats that you know your end users will want such as Excel, Text, or PDF.  This can save time and resources during the day
  4. Utilize Publications when multiple users have different requirements for the same documents

.

For more information on Scheduling Objects and Publications, see the below links

.

DOC - BI 4.1 SP4 BI Launchpad User Guide - Chapter 7 - Scheduling Objects

.

DOC - BI 4.1 SP4 BI Launchpad User Guide - Chapter 10-11 - Publications

.

TIP 2.2 - Use the Retry options when Scheduling to Automate Retries

.

Although this isn't really a true performance tip, I do find that it is a best practice that goes hand in hand with scheduling.  It often amazes me how many people are not aware of the Retry functionality within the Schedule dialog (CMC only).  This feature allows you to configure your scheduled instances to retry X number of times, X number of seconds apart, if a failure occurs.

.

Here is a screenshot of this option in BI 4.1

.

retries.png

.

Where this tip DOES save you time is in hunting down and manually rescheduling reports that may have failed due to database issues or resource issues on the BI Platform side.  Intermittent failures are usually tied to resources somewhere in the process flow so simply setting up retries a few minutes apart can help in limiting the number of true failures we see in a busy environment.
.
This option can be set in the Default Settings/Recurrence section of the Schedule Dialog or under the Schedule/Recurrence section.  The difference between the two is that the Default Settings option will set the default retry values for any future schedules.  Setting it under the Schedule section only sets it for that particular schedule.
.
NOTE:  This option is currently only available in the CMC and not through BI Launchpad
.

TIP 2.3 - Use Instance Limits to help reduce the # of Instances in your environment

.

This is another little known feature that you can use to help improve the performance of your system.  The feature is called Instance Limits and you can set it on a Folder or Object Level.

.

The basic concept is that you can set limits on the # of instances a folder or object will keep.  If the limit is exceeded, the CMS will clean up the oldest instances to help reduce the amount of metadata and resources that is stored in the CMS database and on the Filestore disk.

.

Here are the basic instructions on how to enable and set limits, as found in the CMC Help guide:

.

Setting limits enables you to automatically delete report instances in the BI platform. The limits you set on a folder affect all objects in the folder.

At the folder level, you can set limits for:

  • The number of instances for each object, user, or user group
  • The number of days that instances are retained for a user or a group

.
Steps to enable Instance Limits in the CMC

  1. Go to the Folders management area of the CMC.
  2. Locate and select the folder for which to set limits, and select Actions/Limits.
  3. In the Limits dialog box, select the Delete excess instances when there are more than N instances of an object check box, and enter the maximum number of instances per object the folder can contain before instances are deleted in the box. The default value is 100.
  4. Click Update.
  5. To limit the number of instances per user or group, Click the Add button beside Delete excess instances for the following users/groups option.
  6. Select a user or a group, click > to add the user or group to the Selected users/groups list, and click OK.
  7. For each user or group you added in step 6, in the Maximum instance count per object per user box, type the maximum number of instances you want to appear in the BI platform. The default value is 100.
  8. To limit the age of instances per user or group, click Add beside the Delete instances after N days for the following users/groups option.
  9. Select a user or a group, click > to add the user or group to the Selected users/groups list, and click OK.
  10. For each user or group you added in step 9, in the Maximum instance age in days box, type the maximum age for instances before they are removed from the BI platform. The default value is 100.
  11. Click Update.

.

Below is a screenshot of the dialog for your reference

.

instancelimits.png

.

Once you have enabled Instance Limits, you will have better control over the size of your CMS and Input/Output FRS.  A bloated CMS database and Filestore can definitely contribute to a slower running BI system in general so having a handle on this can definitely help keep your system running at top speed.

.

TIP 2.4 - Platform Search Tweaking for Performance

.

Have you ever seen a bunch of resources (CPU/RAM) being used on your BI Platform server without any user activity?  If you have, this is most likely the Continuous Crawl feature of Platform Search doing a whole lot of indexing.

.

What is Platform Search?

.

Platform Search enables you to search content within the BI platform repository. It refines the search results by grouping them into categories and ranking them in order of their relevance.

.

There is no doubt that Platform Search is a great feature!  It is just a factor that needs to be taken into consideration when sizing an environment for Performance.

.

The below Administrators guide talks about this feature and how to configure it:

.

DOC - BI Platform Administrators Guide (BI 4.1 SP4) - Chapter 22 - Platform Search

.

 

When BI 4.0 first came out, support saw a lot of instances where customers were seeing performance degradation and resource issues on their system AFTER migrating the bulk of their content over to the new BI 4.0 system.

.

After an extensive investigation, we discovered that in most of these cases, the issue was the Indexing of this "new" content that was added to the server.

So how does this affect performance?  How can adding new content to a BI 4.x system cause Processing Servers and other resources to spike up?

.

Behind the scenes, the Platform Search application detects that there is new content that needs to be indexed and cataloged.  This means that every new object (Webi Doc, Universe, Crystal Report, etc.) needs to be analyzed, cataloged and indexed by the Search Service.  To do this, the Platform Search Service, found on an Adaptive Processing Server, will utilize Processing Servers (Webi, Crystal, etc.) to read the report contents and generate an index that it can use to map search terms to the content.  Really cool functionality, but with large documents containing lots of data, objects, keywords, etc., this can add a lot of overhead to the system.  Especially if a lot of new objects are added at once.

.

By default the indexer is configured to continuously crawl the system and index the metadata of the objects.  If you find this is taking up a lot of resources on your system, then you may want to use the Schedule option to control when it runs.  Running indexing outside of regular business hours or peak times will give you the best performance.

.

Luckily we can configure the frequency and verbosity level used by the Indexer.  These options are discussed in Chapter 22 of the Administrators guide above.

.

In short, be sure to keep Platform Search on your radar in case you have unexplained resource consumption on your server.

.

More Info:

.

KBA - 1640934 - How to safely use Platform Search Service in BI 4.0 without overloading the server?

.

BLOG - What is the optimal configuration for Platform Search in BI 4.x? - By Simone Caneparo

.

.

Chapter 3 - Report Design Best Practices

.

This chapter will discuss some Report Design Best Practices that can help you optimize your report for Performance.  These tips should be considered whenever a new report is being designed.  A lot of these can also be applied to existing reports with little effort.

.

A compilation of Report Design Tips & Tricks, not necessarily related to performance, can also be found in the below document by William Marcy.  This is a great document and is a must see for anyone striving to design better reports.
.

DOC - Webi 4.x Tricks - By William Marcy & various other contributors on SCN.

.

.

TIP 3.1 - Steer Clear of Monster Webi Documents

.

A "Monster Document" is a document that contains many large reports within it.  A Web Intelligence document can contain multiple Reports; when we refer to Reports, we mean the tabs at the bottom of a Webi document.  The term "report" is often used loosely to mean a whole Webi document, but it is important to differentiate between the two.

.

When creating a Document, we need to start with the actual Business Need for that document.  We can do this by asking the stakeholder questions like:

.

  1. What is the primary purpose of this document?
  2. What question(s) does this document have to answer?
  3. How many different consumers will be utilizing this document?
  4. Can this document be split into multiple documents that service smaller, more specific needs?

.

By asking questions like the above, we are drilling in on what the actual needs are and can use the answers to these questions to help eliminate waste.  If we build a Monster Document that accounts for every possible scenario that a consumer may want to look at, then we are potentially wasting a lot of time for both the document designer and the consumer.  For example, if only 10-20% of a large document is actually utilized by the consumer on a regular basis, then that means 80-90% of the document is waste.

.

Once we know the Business Needs of the consumer, we can design a focused document that eliminates much of the waste.

.

Below are a few recommended best practices to keep in mind when building a document:

.

  1. Avoid using a large number of Reports (tabs) within a Document
    1. 10 or fewer Reports is a reasonable number
    2. Exceeding 20 Reports in a single document should be avoided
  2. Creating smaller documents for specific business needs allows for faster runtime and analysis
    1. Utilize Report Linking to join smaller documents together.  This is discussed more in TIP 3.2
    2. Aim to satisfy only 1-2 business needs per document.
  3. Provide only the data required for the business need(s) of the Document
    1. 50,000 rows of data per document is a reasonable number
    2. Do not exceed 500,000 rows of data per document
  4. Do not add additional Data Providers if not needed or beyond document needs
    1. 5 data providers is a reasonable number
    2. Do not exceed 15 data providers per document
.

There of course will be exceptions to the above recommendations but I urge you to investigate other ways of designing your documents if you find your document is growing too large.

.

You will see the following benefits by creating smaller, reusable documents based only on the business needs of the consumers.

  • Reduce the time it takes to load the document initially in the viewer/interface
    • Smaller documents will load quicker in the viewers.  This is because the resources needed to transfer the document and process it initially will be much less with smaller documents.
  • Reduce the refresh time of the document.
    • The larger the document, the more time it will take to process the document during a refresh.  Once the report engine receives the data from the data providers, it has to render the report and perform complex calculations based on the document design.  Larger documents with many variables and large amounts of data can take much longer to render during a refresh.
  • Reduce the system resources needed on both the client side and the server side.
    • The resources needed to work with a large document are going to be much greater than those needed for smaller documents.  By reducing the size of your documents, you are potentially reducing the overall system resources, such as CPU, RAM, Disk space, that your system will consume on average.  This can equate to better throughput on your existing hardware.
  • Improve performance while modifying the document
    • When modifying a large document, the client and server has to load the document structure and data into memory.  As you add/change/move objects in the reports, this causes client/server communication to occur.  This can slow down the designer as updates require reprocessing on any objects involved.  The more objects in a document, the longer each operation during a modify action can take.
  • Improve performance for the consumer during adhoc query and analysis.
    • Slicing, dicing, filtering and drilling actions will perform quicker on smaller documents as well.  This will equate to faster response times to the consumers as they navigate and do detailed analysis on the documents

.

.

TIP 3.2 - Utilize Report Linking Whenever Possible

.

Report Linking is a great way to relate two documents together.  It can be an alternative to drilling down and gives the report designer better control over the size and performance of their documents.  Report Linking can help reduce the size of individual documents by allowing the designer to break documents out into smaller chunks while still keeping them related to each other.  This complements the recommendation to steer clear of Monster Documents very nicely.

.

The concept of Report Linking is simple: you embed a hyperlink into a document that calls another document.  This hyperlink can use data from the source report to provide prompt values to the destination report.  Below is an example that illustrates the concept:

.

  • Sales_Summary is a summary report that summarizes the sales for all 100 sites of Company XYZ Inc.
  • Sales_Summary has a hyperlink that allows a user to "drill into" a 2nd Report (Sales_Details) to get the sales details on any of the 100 sites.
  • Sales_Summary is scheduled to run each night and takes ~20 minutes to complete.
  • Users can view the latest instance of Sales_Summary which takes only a few seconds to load.
  • Users can drill down into Site Sales data for each of the 100 sites which launches Sales_Details report using Report Linking and a prompt value
  • The prompt value filters the Sales_Details report using a Query Filter so that it only displays the sales details for the 1 site that the user drilled into.

.

In the above scenario, we see many benefits

  1. The Sales_Summary report only contains the Summary details.  Therefore it runs faster than if it contained both summary and detailed data
  2. The Sales_Summary report is smaller and will load/navigate much quicker on its own
  3. The User can drill down and get a much faster response time because the details report only contains the specific data that they are interested in

.

The Web Intelligence User Guide covers this in more details in Section 5.1.3 - Linking to another document in the CMS

.

DOC - Web Intelligence User Guide - BI 4.1 SP04 Direct Link  - Chapter 5 - Section 5.1.3

.

The easiest way to generate these Hyperlinks is using the Hyperlink Wizard.  This Wizard is currently only available in the HTML Interface.  For manual creation of the hyperlinks, you will want to follow the OpenDocument guidelines available in the below link:

.

DOC - Viewing Documents Using OpenDocument

.

Here is a screenshot of the Wizard and where the button is on the toolbar.  It can be a little tricky to find if you haven't used it before:

.

hyperlink wizard-1.png

.

It is important to note that this can add a little more time to the planning and design phases of your Document creation process.  Properly implemented though, this can save your consumer a lot of waiting and will reduce the backend resources needed to fulfill requests

.

When configuring a hyperlink using OpenDocument or the HTML Hyperlink Wizard, you can choose whether or not you want the report to refresh on open, or to open the latest instance.  Our recommendation is to use Latest Instance whenever possible.  This allows you to schedule the load on your database and backend processing server and will reduce the time it takes for the consumer to get their reports.
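As a sketch of what such a link can look like, the snippet below assembles an OpenDocument URL in Python.  The server host, port, document CUID and prompt name are all placeholders; the parameter names (sIDType, iDocID, sInstance, lsS<PromptName>) come from the OpenDocument guide, so verify them against the version of the guide that matches your deployment:

```python
from urllib.parse import urlencode

# Placeholder values -- substitute your own server/port, document CUID and prompt name
base = "http://bi-server:8080/BOE/OpenDocument/opendoc/openDocument.jsp"
params = {
    "sIDType": "CUID",         # identify the target document by its CUID
    "iDocID": "AbCdEf123456",  # CUID of the Sales_Details document (placeholder)
    "sInstance": "Last",       # open the latest scheduled instance, as recommended
    "lsSSite": "Site 42",      # answer the hypothetical "Site" prompt with the drilled value
}
link = base + "?" + urlencode(params)
print(link)
```

Note how sInstance=Last implements the Latest Instance recommendation: the consumer gets the pre-scheduled result instead of triggering a fresh refresh against the database.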

.

TIP 3.3 - Avoid Autofit When not Required

.

The Autofit functionality allows you to set a cell, table, cross-tab or chart to be resized automatically based on the data.  A cell for example, has the option to Autofit the Height and Width of the cell based on the data size.  The below screenshot shows this feature in the Applet Interface for a cell.

.

autofit.png

.

This is a great feature for the presentation of the report but it can cause some performance delays when navigating through pages or generating a complete document.

.

NOTE:  The default setting for a cell is to enable the Autofit height option, so it is important to know how this can affect the performance of your reports.

.

How does this affect performance of the report?
.

When autofit is enabled for objects on a report, the Processing Server has to evaluate the data used in every instance of that object in order to determine the size of the object.  This means that in order to skip to a particular page of the report, the processing server would need to calculate the size for every object that comes before that page.  For example, if I have 100,000 rows of data in my report and I navigate to page 1000, then the processing server has to generate all of the pages leading up to page 1000 before it can display that page.  This is because the size of the objects on each page is dynamically linked to the rows of data so it is impossible to determine what rows will be on page 1000 without first calculating the size of the objects for each page preceding it.

.

In short, this option adds a lot more work to the page generation piece of the report rendering process.  A fixed size for height and width allows the processing server to determine how many objects fit on each page and allows it to skip the generation process for pages that are not requested.

.

For another example:  if I have 100,000 rows and have set my objects to fixed width/height, then the processing server knows that exactly 50 rows will fit on each and every page.  If I request page 1000, it knows that the rows on that page will be rows 49,951 to 50,000.  It can then display that page with just those rows in it.  Way quicker than having to generate 999 pages first!
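The paging arithmetic can be sketched in a couple of lines. This is illustrative only; the real processing server works in terms of rendered object heights, not a simple row count:

```python
def rows_on_page(page, rows_per_page):
    """With fixed cell heights, the server can compute which rows fall on a
    requested page directly, skipping generation of all preceding pages."""
    first = (page - 1) * rows_per_page + 1
    last = page * rows_per_page
    return first, last

# Page 1000 of a 100,000-row report at 50 fixed-height rows per page
first, last = rows_on_page(1000, 50)  # rows 49,951 to 50,000
```

With Autofit enabled, no such closed-form lookup exists, which is why every preceding page must be generated first.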

.

--------

.

As you can imagine, this mostly just affects reports that have many rows of data and have many pages.  If you have a report with only a few pages, it probably isn't worth looking at this option.  For larger, longer reports, it might be worth investigating.

.

.

TIP 3.4 - Utilize Query Filters instead of Report Filters whenever possible

.

A Query Filter is a filter that is added to the SQL Statement for a report.  Query Filters limit the data that is returned by the Database server itself by adding to the WHERE clause of the SQL Statement.

.

A Report Filter is a filter that is applied at the Report Level and is only used to limit the data displayed on the report itself.  All of the data fetched from the Database is still available behind the scenes, but the report itself is only showing what is not filtered out.

.

There is a time and a place for both Query Filters and Report Filters, but understanding the differences between them is a good way to ensure that you are not causing unnecessary delays in your report refreshing and rendering.  It is best to predefine Query Filters in your Semantic Layer, but you can also add them manually using the Query Panel within Web Intelligence itself.

.
Here is a screenshot of a Predefined Filter being added to a Query in Query Panel
.
predefined query filter.png
.
And here is an example of a similar Query Filter being added manually
.
queryfilter-qp.png
.
In both of the above cases, the WHERE clause of the SQL Statement will be updated to reduce the data returned to the report to filter based on the year.
.
Alternatively, here is a screenshot of a Report Filter that does something similar
.
reportfilter.png
.
In this Report Filter example, the displayed data is filtered to the selected year but the data cube itself still contains ALL years.  This can affect performance, so be sure to use Query Filters to limit the data whenever possible.  There are of course scenarios where Report Filters are the better choice for slicing and dicing, but it is something to keep in mind when designing reports for performance.
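To make the distinction concrete, here is a small Python sketch (the data is invented): a query filter reduces what the database returns, while a report filter only restricts what is displayed from a fully fetched cube.

```python
# Rows that exist in the database: (year, revenue)
db_rows = [(2012, 100), (2013, 150), (2013, 200), (2014, 250)]

def with_query_filter(rows, year):
    """Query filter: the restriction becomes part of the SQL WHERE clause,
    so only matching rows are ever transferred to the document."""
    return [r for r in rows if r[0] == year]

def with_report_filter(rows, year):
    """Report filter: every row is fetched into the document's cube;
    only the displayed block is restricted afterwards."""
    cube = list(rows)                        # full fetch from the database
    shown = [r for r in cube if r[0] == year]
    return cube, shown

fetched = with_query_filter(db_rows, 2013)       # 2 rows cross the network
cube, shown = with_report_filter(db_rows, 2013)  # 4 rows fetched, 2 displayed
```

Both approaches display the same two rows; the difference is how much data had to be fetched, transferred and held in memory to get there.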
.

TIP 3.5 - Avoid Charts with Many Data Points

.

BI 4.0 introduced a new charting engine that is called Common Visualization Object Model, or CVOM for short.  This is a versatile SDK that provides enhanced charting capabilities to Web Intelligence and other SAP Products.  Web Intelligence utilizes CVOM for creating the charts and visualizations found within the Web Intelligence product.  The CVOM service is hosted on an Adaptive Process Server (APS) and is referred to as the Visualization Service.

.

Out of the box, this service is already added to the default APS but depending on the usage of visualizations in your deployment, you will likely want to split out the APS services according to the System Configuration Wizard or APS Splitting Guidelines.

.

If we click the Edit Common Services option on the right-click menu of the APS in the CMC, we will see it listed as the following:

.

vizservice-aps.png

.

The reason this is relevant for performance is that this service can become a bottleneck when the generation of charts takes a long time due to resource or sizing issues, so it is important to ensure it is sized correctly.  We discuss this in more detail later on in the Sizing chapter.

.

When we spoke to the developers of the CVOM component and asked them for advice on fast performing visualizations, they gave us a tip based on their testing and development experience: avoid using large charts with many data points and instead use multiple smaller charts with fewer data points within your reports.

.

The reason behind this is that the CVOM components can produce charts much quicker when they do not have many data points to contend with.  Some business needs may still require large charts, but whenever possible the recommendation is fewer data points per chart for better performance.

.

DOC - Webi User Guide - Chapter 4.3 - discusses Charting with Web Intelligence

.

.

TIP 3.6 - Limit use of Scope of Analysis

.

As quoted from the Webi User Guide:

.

"The scope of analysis for a query is extra data that you can retrieve from the database that is available to offer more details on the results returned.

.

This extra data does not appear in the initial result report, but it remains available in the data cube, and you can pull this data into the report to allow you to access more details at any time. This process of refining the data to lower levels of detail is called drilling down on an object.

.

In the universe, the scope of analysis corresponds to the hierarchical levels below the object selected for a query. For example, a scope of analysis of one level down for the object Year, would include the object Quarter, which appears immediately under Year.

.

You can set this level when you build a query. It allows objects lower down the hierarchy to be included in the query, without them appearing in the Result Objects pane. The hierarchies in a universe allow you to choose your scope of analysis, and correspondingly the level of drill available. You can also create a custom scope of analysis by selecting specific dimensions to be included in the scope."

.

Scope of Analysis is a great way to provide drill down capabilities and "preload" the data cube with the data needed for drilling in on dimensions.  Where this can impact performance is with those extra objects being added to the SQL statement behind the scenes.  It is important to note that by adding objects to the scope of analysis, you are essentially adding them to the query that will be run against the database.  This can impact the runtime of the query, so be sure to make this decision consciously.

.

As an alternative to Scope of Analysis, Report Linking can be utilized to achieve an "on-demand" type of drilldown.  This can offload the performance hit to only the times where this extra data is required.  Since some report consumers may not drill down on the extra data fetched, it may make sense to exclude it by default and provide OpenDocument hyperlinks (report linking) to the consumers to drill down on the data as needed.

.

Below is an example of using Scope of Analysis to add Quarter, Month and Week to the scope even though the Result Objects only include Year:

.

scope of analysis.png

.

What this essentially does is modify the query to include Quarter, Month and Week.  This of course returns more data and could take longer to run.

.

In short, you should ensure that Scope of Analysis is used consciously and that the majority of report consumers will benefit from it.  An alternative is Report Linking as discussed above.

.

TIP 3.7 - Limit the # of Data Providers Used

.

Best practice from the field is to limit the number of data providers to 15 or fewer for faster performing reports.  If you need more than 15 data providers, you may want to consider a different way of combining your data into a single source.  Using a proper ETL tool and Data Warehouse is a better way to achieve this and pushes the consolidation of data to a data warehouse server instead of the BI Server or Client machine.

.

The current design of the Webi Processing Server is to run Data Providers in series.  This means that each data provider is run one after another and not in parallel as you might expect.  So, the combined total runtime of ALL of your data providers is how long the report will take to get the data.

.

Here is a representation of the combined time it might take for a report with multiple data providers:

.

DP  series.png

.

Another consideration for reports with a lot of data providers is that merging dimensions between multiple sources adds overhead into the processing time.  Keeping it simple will certainly result in a better performing report.
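The serial behaviour described above can be summarized with a little arithmetic (the runtimes here are hypothetical):

```python
# Hypothetical runtimes, in seconds, for three data providers in one document
provider_runtimes = [12.0, 8.5, 20.0]

# Pre-BI 4.2, data providers run one after another, so the total
# data-fetch time is the SUM of the individual runtimes
serial_total = sum(provider_runtimes)        # 40.5 seconds

# With parallel data fetching (relational sources), the best case
# approaches the runtime of the SLOWEST provider
parallel_best_case = max(provider_runtimes)  # 20.0 seconds
```

This is why trimming or consolidating data providers pays off directly in refresh time on releases where fetching is serial.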

.

UPDATE (BI 4.2) - It is now confirmed that SAP BusinessObjects Business Intelligence Platform 4.2 will include a new feature called parallel data fetching.  This will allow data providers to run in parallel (simultaneously) and will improve performance.  This feature will initially only be available for relational data providers.  More details at link below:

.

     SCN DOC - SAP BI 4.2 Beta - What's New in Web Intelligence

.

.

TIP 3.8 - Don't accidentally Disable the Report Caching

.

Web Intelligence utilizes disk and memory caching to improve the performance of loading and processing documents & universes.  This can provide a faster initial load time for common reports and universes when implemented correctly.

.

The good news is that caching is enabled by default so in most cases this will be happening automatically for you and your users behind the scenes. There are a few situations where cache cannot be used though so we wanted to make sure report designers were aware of these:

.

The following functions will force a document to bypass the cache:

.

CurrentDate()

CurrentTime()

CurrentUser()

GetDominantPreferredViewingLocale()

GetPreferredViewingLocale()

GetLocale()

GetContentLocale()

.

If you use these within your document, then cache will not be utilized.  These functions are quite common so it is important to be aware of the potential impact on caching they can have.

.

At the current time, caching is done at a document level and not an individual Report (tab) level.  Therefore, if these functions are used anywhere in the document, the cached copies will not be used for subsequent requests.

.

TIP 3.9 - Test Using Query Drill for Drill Down Reports

.

What is Query Drill?  As quoted from the Web Intelligence User Guide:

.

"When you activate query drill, you drill by modifying the underlying query (adding and removing dimensions and query filters) in addition to applying drill filters.

.

You use query drill when your report contains aggregate measures calculated at the database level. It is designed in particular to provide a drill mode adapted to databases such as Oracle 9i OLAP, which contain aggregate functions which are not supported in Web Intelligence, or which cannot be accurately calculated in the report during a drill session.

.

Query drill is also useful for reducing the amount of data stored locally during a drill session. Because query drill reduces the scope of analysis when you drill up, it purges unnecessary data."

.

Performance gains can be appreciated by reducing the amount of data that a Webi document stores locally and by pushing some of the aggregation to the database server side.

.

Performance gains may or may not be realized by using this option but it is simple enough to test it out to see if it will improve performance for you.  To enable this option, go into the Document Properties and check the "Use Query Drill" option.  Below is a screenshot of the option:

.

querydrill.png

.

TIP 3.10 - Mandatory Prompts vs Optional Prompts

.

This tip came to me while investigating a support incident.  The customer I was working with noticed that reports took significantly longer to refresh when his prompts were Optional vs Mandatory.  We were seeing a 30 second difference in even one of the simpler reports he had for testing.  We investigated this through the traces and noticed that the SQL Generation functions were executing twice when Optional prompts were involved, adding to the overhead of running the report.
.

This was happening in XI 3.1 SP7 on the customer's side, so it was with a legacy UNV universe.  I could replicate the issue internally with our simple eFashion universe, but since it executes very quickly, the extra time was barely noticeable in my own testing.  I collected my internal logs and BIAR file and asked a developer for a quick review.

.

The developer confirmed that the delay I saw was from the SQL Generation functions, as suspected.  He then did a code review to see why this was happening.  His explanation was that Optional prompts may or may not have values, and therefore the SQL generation can change after the prompt dialog appears.  For example, if an optional prompt value is not selected, then the WHERE clause will omit that object.  With Mandatory prompts, the SQL structure will always be the same before and after prompts are selected, so the SQL does not need to be regenerated after a value is selected.

.

So, in short, Optional vs Mandatory prompts can give different performance results, so this should be considered before choosing one over the other.  As with many of the other tips in this doc, this does not mean that you should not use Optional prompts.  They are useful and often necessary, but they are a factor, and as long as you know this, you can optimize your report design.

.

** TIP 3.11 - Function NumberOfPages() will force generation of all pages

.

Be careful when using the function NumberOfPages() as it forces the report to generate all of the pages in order to determine the total page count.  If this function is not used, the document will not generate all of the pages right away.  It will only generate them when they are specifically requested, when the document is exported to another format, or when the Last Page navigation button is pressed.

.

This can be a useful function, but be aware that it could cause unnecessary delays.  Be sure to evaluate whether or not your end users require this functionality before using it.

.

.

Chapter 4 - Semantic Layer Best Practices

.
Most of the below Best Practices involve the Semantic Layer, or SL as we sometimes refer to it.  These Best Practices can help you design faster running queries, which can result in faster running Webi Docs.

.

TIP 4.1 - Only Merge Dimensions that are needed

.
A Merged Dimension is a mechanism for synchronizing data from different Data Providers.  For example, if your document had 2 Data Providers and each of them has a "Product Name" dimension, you could merge the two different dimensions into a single "Merged" dimension that would contain the complete list of Product Names from each data provider.
.
Web Intelligence will automatically merge dimensions in BI 4.x by default, so you may want to evaluate if there are performance gains you can achieve by reviewing the merged dimensions.  If you do not want your dimensions to be automatically merged, you can uncheck the "Auto-merge dimensions" property in the Document Properties of your reports.
.
We have 2 Document Properties within a Webi document that can affect the merging of dimensions:
.
Auto-merge dimensions -- Automatically merges dimensions with the same name from the same universe.
.
Extend merged dimension values -- This option will automatically include merged dimension values for a dimension even if the merged dimension object is not used in a table.
.
Merging dimensions has overhead associated to it that can impact the performance of your Webi documents.  If you do not need to have certain dimensions merged within your document, you can simply choose to unmerge them.  This removes the overhead performance hit that is associated with merging those dimensions.  Besides, you can always merge them again later if needed.
.
unmerge.png
.
In short, to squeeze a little extra performance out of your larger reports, it might be worth unmerging dimensions that are not being used as merged dimensions.
.

TIP 4.2 - Build Universes & Queries for the Business Needs of the Document

.
Like any successful project, the key to a successful Webi Document is good planning.  This helps avoid scope/feature creep when you build out the document.  During the planning stage, it is important to determine exactly what the business needs for your document are.  Once you know the business needs, you can build a lean document that only contains the information needed to fulfill those needs.
.
Just like the previous tip that talks about "Monster" documents, we also need to avoid "Monster" queries/universes as well.  The fact is, the larger a universe or query is, the worse the performance and overhead resources will be.  By focusing only on the business needs, we can minimize the size of our queries and optimize the runtime of our documents.
.
As a real-life example, I have seen a report that was built off of a query that contained over 300 objects.  This report pulled back around 500,000 rows of data and took over 45 minutes to complete.  On inspecting the document, only about 1/4 of the objects were used in the document.  When asked why they were using a query that had over 300 objects in it, they didn't have an answer.  If we do the math on this, 300 objects x 500,000 rows = 150 million cells.  It was likely that this query was designed to account for ALL scenarios that the query designer could think of and NOT based on the business needs of the report consumer.
.
In Summary, it is important to know who will be utilizing the universes and what their needs will be.  You then want to build a lean universe, and supporting queries, that are optimized to suit those needs.
.

TIP 4.3 - Array Fetch Size Optimizations

.
The Array Fetch Size (AFS) is the maximum  # of rows that will be fetched at a time when running a Web Intelligence document.  For example, if you run a query that returns 100,000 rows of data and you have an Array Fetch Size of 100, it will take 1000 fetches of 100 rows per fetch (1000 x 100 = 100,000) to retrieve all of those rows.
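The fetch-count arithmetic can be sketched as follows; a larger AFS means fewer round trips, at the cost of larger network packets and more memory per fetch:

```python
import math

def fetch_count(total_rows, array_fetch_size):
    """Number of round trips needed to retrieve all rows at a given AFS."""
    return math.ceil(total_rows / array_fetch_size)

fetch_count(100_000, 100)    # 1000 fetches of 100 rows each
fetch_count(100_000, 1000)   # 100 fetches
```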
.
In newer versions of Web Intelligence, we automatically determine an optimal AFS based on the size of the objects within your query.  For most scenarios, this results in an optimized value that returns the data with good performance.  Sometimes, though, manually setting this to a higher value can squeeze out a little better performance.
.
I did some quick testing on my side and here are the results that the Array Fetch Size had on my test server:
.
afs.png
.
As you can see above, the time it took to run the same query varied based on the AFS value that was set.  The optimized value (which I believe was around 700 behind the scenes) took around 30 seconds.  By overriding this and setting my AFS to 1000, I was able to shave another 12 seconds off and take it down to 18 seconds.  This is great for performance, but keep in mind that this means larger packets are sent over the network and extra memory will be needed to accommodate the larger fetches.
.
As I mentioned, by default the optimized value will be used for newly created connections/universes.  To override this and test your own values, you have to disable the AFS optimization using a Universe Parameter called "DISABLE_ARRAY_FETCH_SIZE_OPTIMIZATION".  Setting this to "Yes" will disable the optimization and take the Array Fetch Size value set on your connection.
.
More information on this can be found in the Information Design Tool or Universe Designer Guide referenced below:
.
DOC - Information Design Tool User Guide (BI 4.1 SP4 - Direct Link)
.
.

TIP 4.4 - Ensure Query Stripping is Enabled

.
Query Stripping is a feature that will remove unused objects from a query automatically to improve performance and reduce the data contained in the cube.  Query Stripping was originally only available for BICS based connectivity to BEx queries but was introduced for Relational database connections starting in BI 4.1 SP3.
.
Query Stripping needs to be enabled for Relational Databases through three different options:
.

1.  Enable "Allow query stripping" option at the Business Layer level in the Universe (UNX)

blx-allow query stripping.png

2.  In the Document Properties of the Webi Document

.

docprops-query stripping.png

3.  In the Query Properties

.

queryprop-query stripping.png

.

It is best to double-check those 3 places when implementing Query Stripping.  If it is unchecked at any level, you may not be benefiting from the Query Stripping feature.

.

There is also a way to tell if it is working.  With Query Stripping enabled, refresh your query and then go back into the Query Panel and click the View SQL button.  You should see that only objects used in a block within the report are included in the SQL.  In this example, I am only using 3 of the 6 objects in my report, so the query only selects those objects.

.

querystripping in action.png

.

You can see above that the SQL has been stripped of any unused objects and should run quicker as a result.

.

For BICS based documents, Query Stripping is enabled by default.

.

In summary, you will want to ensure your documents are utilizing Query Stripping to get better performance when refreshing queries.

.

TIP 4.5 - Follow these Best Practices for  Performance Optimizing SAP BW (BICS) Reports

.

There is a lot of great information contained in the below document.  It outlines many best practices for reporting off of SAP BW using the BICS connectivity.  Please review the below guide for more details on optimizing the performance of BICS based Reports.

.

DOC - How to Performance Optimize SAP BusinessObjects Reports Based Upon SAP BW using BICS Connectivity

.

TIP 4.6 - Using Index-Awareness for Better Performance

.

Index-Awareness is described in the Information Design Tool User guide in section 12.7 as:

.

"Index awareness is the ability to take advantage of the indexes on key columns to improve query performance.

.

The objects in the business layer are based on database columns that are meaningful for querying data. For example, a Customer object retrieves the value in the customer name column of the customer table. In many databases, the customer table has a primary key (for example an integer) to uniquely identify each customer. The key value is not meaningful for reporting, but it is important for database performance.

.

When you set up index awareness, you define which database columns are primary and foreign keys for the dimensions and attributes in the business layer. The benefits of defining index awareness include the following:

    • Joining and filtering on key columns are faster than on non-key columns.
    • Fewer joins are needed in a query, therefore fewer tables are requested. For example, in a star schema database, if you build a query that involves filtering on a value in a dimension table, the query can apply the filter directly on the fact table by using the dimension table foreign key.
    • Uniqueness in filters and lists of values is taken into account. For example, if two customers have the same name, the application retrieves only one customer unless it is aware that each customer has a separate primary key."

.

Utilizing Index Awareness can help improve performance, as key columns will be used behind the scenes in the queries to do faster lookups and joins on the database side.

.

The Information Design Tool User Guide covers Index Awareness in the following chapters:

.

DOC - Information Design Tool User Guide (BI 4.1 SP4) - Chapter 12

.

 

.

 

TIP 4.7 - Using Aggregate Awareness for Performance

.

Aggregate Awareness is described as the following in the IDT User Guide:

.

"Aggregate awareness is the ability of a relational universe to take advantage of database tables that contain pre-aggregated data (aggregate tables). Setting up aggregate awareness accelerates queries by processing fewer facts and aggregating fewer rows.

.

If an aggregate aware object is included in a query, at run time the query generator retrieves the data from the table with the highest aggregation level that matches the level of detail in the query.

For example, in a data foundation there is a fact table for sales with detail on the transaction level, and an aggregate table with sales summed by day. If a query asks for sales details, then the transaction table is used. If a query asks for sales per day, then the aggregate table is used. Which table is used is transparent to the user.

.

Setting up aggregate awareness in the universe has several steps. See the related topic for more information."

.

Utilizing the database to pre-aggregate data can help speed up the performance of your Webi documents.  This is because the Webi Processing Server will not have to do the aggregations locally and will only have to work with the aggregated data that is returned from the database side.

.

Use Aggregate Awareness whenever it makes sense.

.

TIP 4.8 - Utilizing JOIN_BY_SQL to avoid multiple queries

.

The JOIN_BY_SQL parameter determines how the SQL Generation handles multiple SQL statements.  By default, SQL Statements are not combined and in some scenarios, performance gains can be realized by allowing the SQL Generation to combine multiple statements.

.

The JOIN_BY_SQL parameter is found in the Information Design Tool in the Business Layer and/or Data Foundation.  Below is a screenshot of the parameter in its default state.

.

join_by_sql.png

.

By changing this Value to "Yes", you are instructing the SQL Generation process to use combined statements whenever possible.  This can result in faster query execution so it may be worth testing this option out on your universes/documents.


.

.

TIP 4.9 - Security Considerations for the Semantic Layer

.

There is no doubt that security is a necessity when dealing with sensitive data.  The purpose of this tip is to prompt you to review your security model and implementation to ensure it is as lean as it can be.  Performance can definitely be impacted, sometimes quite severely, by the complexity of the security model at both your Semantic Layer, and your BI Platform (Users and Groups) levels.

.

As an example, I have worked on an incident recently where we were seeing roughly a 10-40% performance difference when opening a Webi document with the built in Administrator account vs another User Account.  On closer examination, the user was a member of over 70 groups and a good portion of the load time was spent on rights aggregation and look-ups.

.

We also found that there were some inefficiencies in our code that could be optimized in future Support Packages/Minor Releases.  These should help improve performance for customers who may be unaware of the performance impacts their complex security model may be having.

.

So, some actions you may want to consider for this tip are:

.

  1. Review your business requirements and reduce/remove any unnecessary Data/Business Layer Security profiles at the Universe level.
  2. Consider using the Change State / "Hidden" option in the Business Layer for fields that you do not want any users to see.
  3. Consider using Access Levels in the Business Layer to control which users have access to objects
  4. Reduce the # of User Groups/Roles that your Users are a part of
  5. Test performance with an Administrator User and compare it to a Restricted User to gauge the impact on Performance

.


TIP 4.10 - Try Using The UTF8 Charset For NLS_LANG Environment Variable (Oracle Specific)

.

In our internal testing, it was found that using the UTF8 charset was faster than using other charsets along with the NLS_LANG environment variable.  This affects only Oracle based connections and universes but there was about a 10-15% performance gain when the charset UTF8 was used vs others.

.

For example, you can try setting the Environment variable:

.

NLS_LANG = AMERICAN_AMERICA.UTF8

.

Set this on the Web Intelligence Processing Server and Connection Server machines.  Ensure that you do this either as a System Environment Variable or for the User that is running the Server Intelligence Agent (SIA) on that node.

.

.

Chapter 5 - Formula & Calculation Engine Tips

.
These tips involve some insight from the product developers around how the backend calculation engine handles calculations with regard to performance.
.

TIP 5.1 -  Use Nested Sections with Conditions with caution

.

A Nested section, or subsection as they are sometimes called, is a section within a section.  For example, you might have a Country section and a Region section within it.  This would be considered a "Nested" section.  Nested sections can add overhead to the report rendering/processing time.  This is especially true when you add conditions to the section such as "Hide section when...".  This doesn't mean that you should not use Nested Sections; they are certainly useful for making a report look and feel the way you want it to, but you should consider the performance impact before you heavily utilize nested sections within your documents.

.

Here is an example of 4 levels of nested sections from an eFashion based report:

.

nestedsections1.png

.

 

Here are the Format Section options that can affect performance when overused:

.

conditions.png

.

When using conditions within Nested sections, the calculation engine needs to figure out which sections are displayed.  The more nested sections you have, the more overhead there is to figure out which levels of sections are actually visible.  Again, this is very useful in most cases, but for reports with thousands of dimension values in the sections and conditions associated with them, this can impact performance.

.

 

TIP 5.2 -  Use IN instead of ForEach and ForAll when possible

.

This tip came directly from our developers that work with the calculation engine.  Behind the scenes, the code is much more efficient when processing the IN context vs the ForEach or ForAll contexts.

.

The following Document is available on our help portal.  It covers using functions, formulas, calculations and contexts within a Webi document in more detail:

.

DOC - Using functions, formulas, and calculations in Web Intelligence (BI 4.1 SP3)

.

Section 4.3.1.1 covers the "IN" context operator with examples of how it works.  In short, the IN context operator specifies dimensions explicitly in a context.

.

Section 4.3.1.2 and 4.3.1.3 cover the ForEach and ForAll context operators.  In short, these two functions allow you to modify the default context by including or excluding dimensions from the calculation context.

.

In many cases, IN can be used to achieve similar results to the ForEach and ForAll operators, so if you suspect these are contributing to performance issues, try changing your formulas to use IN instead.
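As an illustrative sketch (eFashion object names assumed), the following two formulas placed in a block containing [Year] should return the same result, but the IN version states the full calculation context explicitly rather than asking the engine to extend the default context:

.

=Min([Sales revenue] In ([Year];[Quarter]))

=Min([Sales revenue] ForEach ([Quarter]))

.

In a block displaying [Year], the default context is ([Year]), so ForEach([Quarter]) extends it to ([Year];[Quarter]), which is exactly what the IN version declares up front.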

.

.

TIP 5.3 - Use IF...THEN...ELSE instead of Where operator when possible

.

In most cases, the IF/THEN/ELSE operators can be used instead of a Where operator.  This is more efficient from a calculation engine perspective according to our developers.  If you ever suspect that the Where operator is causing performance degradation in your report, try swapping it for an IF statement if you can.

.

The following document discusses these operators in more detail:

.

DOC - Using functions, formulas, and calculations in Web Intelligence (BI 4.1 SP3)

.

Section 6.2.4.14 covers the usage of the Where Operator and provides examples.

.

Section 6.1.10.11 covers the IF...Then...Else functionality.
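As a hedged example of this swap (eFashion object names assumed), a filtered sum written with the Where operator:

.

=Sum([Sales revenue] Where ([Country] = "US"))

.

can often be rewritten with an If expression instead:

.

=Sum(If([Country] = "US"; [Sales revenue]; 0))

.

Both should return the US revenue total, but always verify that the results match in your own document before swapping operators, since the two are not equivalent in every context.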

.

 

TIP 5.4 - Factorize (Reuse) Variables

.

Factorizing variables essentially means reusing them within other variables.  By doing this, you reduce the number of calculations the engine needs to perform to produce the results.

.

Here is an example of what we mean when we say Factorizing variables:

.

v_H1_Sales = Sum([Sales Revenue]) Where ([Quarter] InList("Q1";"Q2"))

.

v_H2_Sales = Sum([Sales Revenue]) Where ([Quarter] InList("Q3";"Q4"))

.

Now we reuse these two to get the year's sales (H1 + H2 revenue combined):

.

v_Year_Sales = v_H1_Sales + v_H2_Sales

.

By reusing variables, you are saving the time needed to recalculate values that have already been calculated.  The above is a simple example but applying the same logic to more complex calculations can save you some real time on the calculation side.

.

TIP 5.5 - Use BIG numbers sparingly

.

SAP BusinessObjects BI Platform version 4.2 will include a new feature that allows the precision of numerical objects to be up to 40 digits.  In BI 4.0/4.1, the precision was 15 digits.  This is great if you are using very large numbers and will ensure more accurate rounding of them.  It does, however, have an impact on performance and memory.

.

When using big numbers, it is important to remember that there will be a slight decrease in performance within the Web Intelligence calculator and a slight increase in the memory required for each measure that is defined as a big number.

.

To check the type of a measure, you can right click it in the Available Objects panel and choose the "Change Type" option.  Number is the standard (15 digit) precision and Decimal is the new "Big Number" 40 digit precision.

.

change type big numbers.png

.

Recommendation:  Use this feature only when the precision of a number needs to be greater than 15 digits.  This will ensure your performance is not penalized for measures that do not benefit from this feature.

.

Chapter 6 - Sizing for Performance

.

One of the keys to a faster performing report is proper sizing of the back-end components.  More often than not, we see systems that were originally sized correctly for "day one" usage but have since outgrown the original sizing and are now experiencing performance issues due to resource and concurrency limits.  It is important to size your system for today's usage as well as the usage expected in the near future.  It is equally important to have checkpoints a few times a year to ensure you are not outgrowing your original sizing estimates.

.

UPDATE: Ted Ueda has written a great blog that goes over some of these recommendations in greater detail.  Link below:
.

BLOG - Revisit the Sizing for your deployment of BI 4.x Web Intelligence Processing Servers!

.

The following tips will help you size your system for performance and may help you avoid some common mistakes we have seen in support.

.

TIP 6.1 - Use these Resources to Help you Size your Environment

.

The BI Platform is not sized for performance right out of the box.  Since every installation will have a different set of content, users, nodes, rights, data, etc... it is difficult to do sizing without putting quite a bit of preparation and thought into the exercise.

.

The following resources can help you do a sizing exercise on your environment:

 

DOC - Sizing and Deploying SAP BI 4 and SAP Lumira

.

DOC - SAP BusinessObjects BI4 Sizing Guide

.

XLS - SAP BI 4x Resource Usage Estimator

.

To complete a sizing exercise, you will want to use the above Sizing Guide and Resource Usage Estimator.  You will also need to know quite a bit about the hardware and the expected usage of the system to do a proper sizing.

.

.

TIP 6.2 - Do not reuse XI 3.1 Sizing on a BI 4.x System (32bit vs 64bit)

.

A common mistake some administrators make is to reuse the sizing they did for XI 3.1 for their BI 4.x environment.  BI 4.x is much more than a simple upgrade and contains quite a few architectural changes that need to be considered in order to size an environment correctly.  One of the biggest changes was the adoption of 64-bit processes in BI 4.x.  This overcomes one of the major sizing constraints of XI 3.1, which was memory limits.

.

XI 3.1 used all 32-bit processes.  On Windows systems, this meant that the most memory that could be allocated per process was around 2 gigabytes.  In most cases, this limited the scalability of a system, especially where Web Intelligence was concerned.  The Web Intelligence Processing Server (WIReportServer.exe) would only be able to use up to 2 GB of memory before it would hang or crash.  On a Linux system this could be increased to about 4 GB, but that limit could still easily be reached with a few very large report requests.  For this reason, the recommendation was to have multiple Web Intelligence Processing Servers (WIPS) on a single node.  For example, if you had 32 GB of RAM, you might put 12 WIPS on that machine so that a total of 24 GB of that RAM could be used.  There was still the risk of a single WIPS exceeding the 2 GB ceiling, but with load balancing the odds of hitting that limit went way down.

.

In BI 4.x, the WIPS is now a 64-bit process.  This means that the 2 GB limit is no longer an issue.  In the same example as above, you might want to reduce the number of WIPS to 2 instead of 12.  With 2 x 64-bit servers, you can use all of the RAM that is available on the server and you still get fault tolerance and failover capabilities.  Technically you could have just 1 WIPS, but if that one were to crash for some reason, you wouldn't have a second one to handle the failover.

.

There were also some major differences introduced in the Adaptive Processing Server.  These are a big factor and should be taken into account when sizing a BI 4.x system.  The next tip covers this in more detail.

.

In short, be sure to redo your sizing for your BI 4.x upgrade.

.

TIP 6.3 - Ensure the Adaptive Processing Server is Split and Sized Correctly

.

When BI 4.0 first came out, a lot of the performance and stability related issues were narrowed down to inadequate resource availability for the Adaptive Processing Server (APS) and its services.  Out of the box, BI 4.x installs 1 APS with around 21+ services set up on it.  These services provide different functions to the BI Platform, ranging from Platform Search to Universe connectivity.  If you do not split out and size your APS correctly, you will definitely experience resource and performance issues at some point in your deployment.

.

As far as Web Intelligence is concerned, there are 3 main services that are hosted on the APS that can drastically affect performance of a Web Intelligence document.  These are:

.

  • DSL Bridge Service - Used for BICS (SAP BW Direct access) and UNX (Universe) access
  • Visualization Service - Used for creating the charts and visualizations within Web Intelligence documents
  • Data Federation Service - Used for Multi-Source Universes and Data Federation

.

The Platform Search Service can also affect Webi Performance as it utilizes Webi Processing Servers to index the metadata of Webi Docs.

.

Shortly after the release of BI 4.0, SAP released an APS splitting guide that assisted system administrators in splitting the APS to accommodate their system usage.  A link to this guide is found below.  It covers a lot more detail than I will go into here and is a must-read for anyone in charge of a BI 4.x deployment.

.

DOC - Best Practices for SAPBO BI 4.0 Adaptive Processing Servers

.

The document talks about the architecture of the APS process and all of the different services that are installed.  There are recommendations on how you can group services together to pair less resource-intensive processes with those that require more resources.  This helps strike a balance between the number of APS instances and performance.

.

There is also a System Configuration Wizard that is available in the Central Management Console (CMC).  This wizard will do some simple T-shirt sized splitting of the APS and can be used as a baseline for new installs.  SAP still recommends that you do a proper sizing exercise in addition to this though.

.

.

TIP 6.4 - Keep Location and Network in mind when Designing your Environment

.

Network transfer rates can play a large part in the performance of a BI environment.  It is important to know where bottlenecks can occur and how to ensure that network is not going to slow down your performance.

.

It is important to have a fast, reliable network connection between the Web Intelligence Processing Server / APS (DSL Bridge Service) and the Reporting Database.  This is because the data retrieved from the Webi Documents will have to be transferred from the Database server to the WIPS or APS process over the network.  In most cases, it is best to co-locate the Processing Servers with the Database in the same network segment but if that is not possible, it is still important to ensure the network between the two is fast and reliable.

.

If you suspect that network is causing performance bottlenecks for you, you should be able to use either BI Platform, Network or Database traces to identify where the bottleneck is.

.

TIP 6.5 - Use Local, Fast Storage for Cache and Temp Directories

.

Some Administrators change the Cache and Temp directories for their Web Intelligence Processing Servers to some sort of Network Attached Storage (NAS) device.

.

This is unnecessary in most cases, and unless that network storage is as fast as, or faster than, local hard drive storage, it could become a bottleneck.  Cache and temp files for Web Intelligence are non-critical components that do not need to be backed up or made highly available.  If a WIPS doesn't find a cache/temp file it needs, it will simply recreate it.  There is a slight performance hit in recreating a file, but with local storage there is little chance of a file going missing.

.

With NAS, network issues can cause outages of the entire file system, or network traffic could reduce performance.  Local disk is much cheaper and is often quicker for cache and temp files.

.

 

TIP 6.6 - Ensure your CPU Speed is Adequate

.

The speed of the processors or cores available to a BI system can definitely contribute to the performance of your workflows.  I've seen a scenario where a customer noticed much slower performance in their Production environment than in their Quality Assurance environment.  In digging into the issue, the problem was determined to be CPU speed.  In Production, they had 128 cores running at 1200 MHz.  This is great for concurrent requests that run on separate threads.  QA only had 8 cores, but the CPU was a 2.8 GHz processor.  So, in a single-workflow comparison, QA ran requests much quicker than Production.  Production could handle a high load of concurrent users, but the per-request throughput was quite a bit slower.

.

Nowadays, most machines have a pretty fast processor in them so this might not be something that most people will run into.  Where I have seen this more frequently is when older UNIX machines are being used.

.

TIP 6.7 - Use the BI Platform Support Tool for Sizing Reviews

.

The BI Platform Support Tool (BIPST) is a great tool for gathering information about your BI 4.x system landscape.  If you haven't used this tool yet, I highly recommend you download it and play with it.  Below is the link to the BIPST portal:

.

WIKI - BI Platform Support Tool

.

The tool can be downloaded from the above link and there is a Webinar available that covers some of the features and how to use them.  The Wiki itself also gives you a good overview of the features of the tool.

.

For sizing reviews, this tool is invaluable as it gives you a really easy overview of the servers you have and their settings.  It also gives you a good idea of the content and users in your environment, which you can use when doing a sizing (or resizing) exercise.

.

.

Chapter 7 - Architectural Differences between XI 3.1 & BI 4.x

.

This section covers some of the main architectural differences between XI 3.1 and BI 4.x as they pertain to Web Intelligence.  Knowing these differences can help during upgrades and new installs.  For an Architectural Diagram for BI 4.x, please see the below link:

.

DOC - BI Platform 4.1 Architecture Diagram

.

I've divided this chapter up into TIPs like the previous chapters, but these are really more like sections, as they are only for informational purposes.

.

.

TIP 7.1 -  32-bit vs 64-bit - What does it mean?

.

One of the biggest differences between XI 3.1 and BI 4.x is that a number of the processes have been upgraded to 64-bit applications.  64-bit processes can utilize a lot more memory than 32-bit processes, which can greatly increase the throughput of a single process.  In the 32-bit world, a process could only address up to 4 GB of memory in total, and on Windows that was slashed in half by default.  So in the XI 3.1 days, the WIReportServer.exe (Web Intelligence Processing Server) could easily reach the 2 GB maximum on a Windows OS and would then become unstable.

.

This update to a 64-bit Processing Server for Webi means that 64-bit database clients/drivers can now be utilized as well.  This does, of course, require that your operating system and hardware are 64-bit as well.

.

This change is especially relevant for your Sizing of the BI 4.x system.  In previous versions we recommended that you scale the Webi out on a single node to utilize more than 2GB of the available RAM.  In BI 4.x this is not required as a single Web Intelligence Processing Server (WIPS) can utilize essentially all the available RAM on a machine in a single process.

.

TIP 7.2 -  Hosted Services are more heavily used in BI 4.x

.

BI 4.x may look similar on the surface, but another major architectural change on the backend was the shift to service-based process flows.  These services are often referred to as Outproc or Hosted services, as they are often hosted outside of the process that is utilizing them.  As an example, the WIPS utilizes the DSL Bridge Service that is hosted on the Adaptive Processing Server for Semantic Layer access such as UNX Universe and BICS connectivity.

.

This heavier reliance on Hosted services means that there are more variables to consider when investigating issues or sizing an environment.  To name a few considerations:

  • The process flows involve more services, which could be spread across multiple nodes.  This greatly increases the complexity of process flows in some cases.
  • Sizing is more complex, as you have to consider the impact on multiple processes when estimating the resources needed for a report.
  • The network can be a factor when troubleshooting bottlenecks in workflows.

.

In most cases, it is the Adaptive Servers (Job and Processing) that host these services, so the major change concerns the proper scale-out of these servers.

 

TIP 7.3 - Larger # of Processes involved in Process Workflows

 

Below is a list of processes that can be involved in certain Web Intelligence workflows in BI 4.x:

 

  • Web Intelligence Processing Server -> Processing and Creation of Web Intelligence documents
  • Visualization Service (APS) -> Generating Charts
  • DSL Bridge Service (APS) -> New Semantic Layer and BICS connections
  • Data Federation Service (APS) -> Multi Source Universes
  • Connection Server (64-bits) -> 3 Tier mode Connections
  • Connection Server (32-bits) -> 3 Tier mode Connections
  • Secure Token Service (APS) -> SSO Tickets sessions
  • WebI Monitoring Service (APS) -> Client Monitoring
  • Web Application Server -> Page Rendering
  • Central Management Service -> Authentication, Server Communication, Rights, etc...
  • File Repository Service -> File retrieval / Instance Storage
  • Publication Service (APS) -> Web Intelligence Publication Processing
  • Adaptive Job Server -> Publications and Scheduled Jobs

.

I may have missed one or two, but you get the point.  When we compare this list to XI 3.1, you can see the contrast:

.

  • Web Intelligence Processing Server -> Processing and Creation of Web Intelligence documents
  • Connection Server (32-bits) -> 3 Tier mode Connections
  • Web Application Server -> Page Rendering
  • Central Management Service -> Authentication, Server Communication, Rights, etc...
  • File Repository Service -> File retrieval / Instance Storage
  • Adaptive Job Server -> Publications and Scheduled Jobs

.

For this reason, it is important to know the BI 4.x Workflows fairly intimately.  Below are some links to Interactive Workflows that will help you learn and understand these changed workflows.

.

Here is a list of the Web Intelligence Process Flows that were available at the time of this writing.  For an updated list or for other BI Platform topics, please visit the Official Product Tutorial Page here:

.
          DOC - Official Product Tutorials – SAP BusinessObjects Business Intelligence Platform 4.x

.

  • Set a schedule for a Web Intelligence document      process flow
  • Run a schedule for a Web Intelligence document      process flow
  • View a Web Intelligence document on demand      process flow
  • Export a document      process flow
  • Refresh a document based on a multi-source universe      process flow
  • Refresh a document based on a dimensional universe      process flow
  • Refresh a document in Web Intelligence Desktop in one-tier mode      process flow
  • Refresh a document in Web Intelligence Desktop in two-tier mode      process flow
  • Refresh a document in Web Intelligence Desktop in three-tier mode      process flow
  • Refresh a document based on an SAP NetWeaver BW BEx Query using BICS connectivity      process flow
  • Refresh a document based on an SAP Netweaver BW data using a relational UNX universe      process flow
  • Refresh a document based on a multi-source datasource using a relational UNX universe      process flow
  • Refresh a document based on OLAP data using a multidimensional UNV universe    process flow
  • Refresh a document based on OLAP data using a multidimensional UNX universe    process flow

.

 

MORE UPDATES COMING SOON - FOLLOW DOC FOR UPDATES

.
.

Chapter 8 - Performance Based Improvements / Enhancements

.
COMING SOON - FOLLOW DOC FOR UPDATES

  • Parallel Data Refresh coming in BI 4.2!
.
.
.
.

Hide dimension option - Webi 4.0


The "Hide dimension" option does not work properly when it is used to hide a column that has a break applied in Webi 4.0.

 

 

Problem Statement:

In Webi 4.0, hiding the dimension of a break-applied column using the "Hide dimension" option impacts the break properties (the implicit sort gets applied even if it is disabled).

 

 

Break properties before hiding

 

Capture.PNG

2.png

 

 

Break properties after hiding using hide dimension option

 

3.png

 

Workaround:

 

 

Hide the dimension manually using masking options: white background, white font color, and no border.

Select the column --> Right Click --> Click on Format Cell --> Click on the Font tab --> Select white as the font color --> Click on the Border tab --> Remove the border.

 

4.png

Hope this is helpful.

 

Regards,

Amala.S

SAP BusinessObjects Web Intelligence 4.1: Calculation Engine Changes


This page is part of the...

BI
Upgrade Series

Overview

This document describes the corrections and changes to the calculation engine in Web Intelligence 4.1 compared to Web Intelligence XI 3.1, XI 3.0, and XIR2 SP06 and SP03. It compares the new behavior of the calculation engine to its behavior in the previous versions.

It also suggests migration strategies for accommodating the calculation engine changes.

It also describes the formula rewrite mechanism introduced in 4.1 SP03, which shields reports created with an older version from specific changes.

 

 

(Document authored by Pierre Saurel & Pascal Gaulin / Web Intelligence Product Experts)

 

Table of contents

 

Introduction

The calculation engine for Web Intelligence was updated for Business Objects XI 3.0 and 3.1 to include several corrections and improvements. These changes are present in the 4.1 releases.


This document describes these changes and the way they might affect the calculation results in Web Intelligence documents.

 

Where() Operator

"Where" operator works on measures

Prior to XI 3.0, the "Where" operator accurately supported conditions on dimensions or detail objects only. Conditions on measures were possible, but did not always return accurate results.


Web Intelligence XI 3.0 fully supports the usage of measures in "Where" conditions.


More details can be found in the documentation.
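As a hedged illustration (eFashion object names assumed), a “Where” condition on a measure now returns accurate results:

.

=Sum([Sales revenue]) Where ([Margin] > 10000)

.

Prior to XI 3.0, a formula like this could return inaccurate results because the condition references a measure rather than a dimension or detail object.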

 

“Where” operator on measure with a condition on a formula based on a dimension

Previously, dimensions were incorrectly added to the dimensional context of the condition. Now dimensions are only used for the conditional evaluation.


=[Revenue] Where ( DataProviderType(DataProvider([Quarter])) = "Universe" )

 

When used in a table with [Quarter], the result of the formula with the condition was processed without [Quarter] in the context (the same value was replicated for each different quarter).

 

Document migration:

Users can aggregate over the related dimension in the context of the measure: =[Revenue] ForAll([Quarter]) Where ( DataProviderType(DataProvider([Quarter])) = "Universe" )

 

From BI 4.1 SP03, to ensure that you receive results for this formula that correspond to the previous document versions, the system automatically rewrites the formula, using an ad-hoc parameter with the “Where” operator to specify the dimension to take into consideration: =[Revenue] Where ( DataProviderType(DataProvider([Quarter])) = "Universe" ;([Quarter])).

 

This functionality is available as of BI 4.1 SP03 for documents created using the following versions:

 

  • XIR2 all releases
  • XI3.0 all releases
  • XI3.1 SP01 RTM and All FPs
  • XI3.1 SP02 RTM and All FPs
  • XI3.1 SP03 RTM
  • XI3.1 SP04 RTM
  • XI3.1 SP05 RTM
  • BI4.0 SP01 RTM and All Patches
  • BI4.0 SP02 RTM and All Patches
  • BI4.0 SP03 RTM and All Patches
  • BI4.0 SP04 RTM and All Patches

 

For more details, refer to the Automatic Formula Rewrite section, below.

 

Interaction between a context modifier on a measure aggregation and the “Where” operator

Dimensions were incorrectly added as dimensional contexts into the list of dimensions for the context modifiers that have been applied to a measure. This problem happened when "where" operators that used conditions on dimensions were used on expressions that used measures and context modifiers.


Example:

AggregationFct( [measure] forall([dim1]) ) where ( condition on [dim2])

Was processed as: AggregationFct( [measure] forall([dim1];[dim2]) ) where ( condition on [dim2])

Is now processed as: AggregationFct( [measure] forall([dim1]) ) where ( condition on [dim2])


Interaction between a context modifier on a dimension and the “Where” operator

For a “where” operator with a condition on a dimension applied to an expression on a dimension with context modifier, the dimension of the condition was incorrectly added to the context modifier.


Example:

[dim 1] in ([dim 2]) where( condition on [dim1]) was

Interpreted before as: [dim 1] in ([dim 2];[dim1]) where( condition on [dim1]) and is

Interpreted now as: [dim 1] in ([dim 2]) where( condition on [dim1])


Migration:

To get the previous behavior, swap the “where” operator and the context modifier. Example: [dim 1] where( condition on [dim1] ) in ([dim 2]).

 

“Where” operator is incorrectly applied when outside of an aggregation expression

For a “where” operator with a condition on a dimension outside an aggregation function, the “where” condition was incorrectly applied before the aggregation calculation.  The condition is now applied after the aggregation, respecting the parentheses.


Example:

AggregationFct ([measure]) Where([dim] ..).

Before, Where([dim]) was applied to [measure] before “AggregationFct” was evaluated.

Now, “AggregationFct” is applied to [measure] and the “Where” condition is applied afterwards.


Migration:

To get the previous behavior, move the “Where” expression inside the parentheses. Example: AggregationFct ([measure] Where([dim]…))

 

Filters

NoFilter() function and “In Break” context modifier

When using the NoFilter() function, the filters would be applied when they were not supposed to, if an "In Break" parameter was used. This problem has been fixed and the filters are now ignored, as expected.
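As a hedged sketch (eFashion object names assumed) of the construction this fix concerns, NoFilter() combined with the extended-syntax Break keyword:

.

=NoFilter(Sum([Sales revenue]) In Break)

.

With the fix, report filters are ignored inside NoFilter() as expected, even when the "In Break" context operator is present.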

 

Using filters on object details with multiple values

Details can have multiple values.  When displayed in a table together with the dimension object on which they depend, they can show #MULTIVALUE (when there are multiple detail values for a single dimension value), unless the “Avoid duplicate row aggregation” table setting is checked.


Filtering on details with multiple values would not select the individual values on rows where they show as #MULTIVALUE. To work around this issue, it was then necessary to check the “Avoid duplicate row aggregation” table setting.


This problem has been fixed: when a filter is applied to an object detail where it shows as #MULTIVALUE, this will correctly select the actual value.

 

Example: We have an object [Range] with a detail [Detail] which has multiple values:

Table with detail.png

We set a filter on [Detail] to select the values “220” (which is part of the #MULTIVALUE) and “350”.


Before the fix: the “220” [Detail] value does not show in the table, although it has been selected in the filter:

With a filter on Detail - before.png

After the fix: The “220” [Detail] value will correctly show in the table, even when the “Avoid duplicate row aggregation” setting is unchecked:

With a filter on Detail - after.png

Versions where this behavior has changed:

  • XI 3.1 since SP7 patch 3
  • 4.1 since SP4 patch 10, SP5 patch 6, SP6 patch 1 and SP7

 

Running Calculations

Running calculations will not reset

After 4.1 SP03, running calculations will not automatically reset for each new section value.  As a result, the calculation for the first cell of a block for a particular section value is based on the last cell value of the block from the previous section instance.


Before 4.1 SP03, the running calculation was reset for each new section value.

 

In the example below, the running sum for 2005 (cell in bold) is independent from the running sum for 2004.

 

Reset1.jpg

 

After 4.1 SP03, the running calculation for the current section value is based on the calculation from the previous section.  In the example below, the running sum for 2005 (cell in bold) is based on the running sum for 2004.

 

   Reset2.jpg

Migration:

To keep the original behavior, specify a list of dimensions as the reset parameter (3rd parameter of the Running[Calculation] functions):

=RunningSum([Sales Revenue];([State])).

 

From 4.1 SP03, to ensure that you receive results for this formula that correspond to the previous document version, the system automatically rewrites this formula accordingly (using the keyword "Section" as the 2nd operand of the running calculation).  This functionality is available only for documents created before XI 2 SP 05.9 versions.  For more details, refer to the following section, "Automatic formula rewrite".

 

Data order in running calculations

A running calculation was not respecting the order of the data but the default order of the result set. The running calculation now takes into account the graphically displayed order of the data (table or chart).


Running calculations in cross tables and reset context

By default “Running Sum” is evaluated in a cross-table following a row direction (from left to right row by row).

With XI 3.x versions, when adding a dimension as the reset context (3rd parameter), the “running sum” was improperly evaluated in a column-based direction (from top to bottom, column after column).

Now, in this case it is processed following a row direction.

 

Example: =RunningSum([Sales revenue];([State])),

 

Previously:  column direction (wrong) processing:

 

New behavior: row direction processing:

Migration: to get the previous result (processing by column) with a new version (BI 4.1 SP03), use the value COL as the 2nd parameter.

 

From BI 4.1 SP03.3, to ensure that you receive results for this formula that correspond to previous document versions, the system automatically rewrites the formula using an ad-hoc parameter, FORCE_COL, with the “RunningSum” function to force the processing order to column in the body of the cross-table.

 

This functionality is available as of BI 4.1 SP03.3 for documents created using the following versions:

  • All XI 3.X versions,
  • BI 4.0 patch 2.20, 2.21
  • BI 4.0 SP5 and all patches
  • BI 4.0 SP06 and patches 6.1, 6.2, 6.3, 6.4
  • BI 4.0 SP07
  • BI 4.1
  • BI 4.1 SP1 and patch 1.1

 

For more details, refer to the section on Automatic Formula Rewrite, below.


Running sums with reset in cross table footers

In cross-table footers, the RunningSum() function will sum up the values of its measure

  • per row if it is in the row footer
  • per column if it is in the column footer


Example:

In the following table, we have a running sum of the measure used in the body, in the column and row footers:

snapshot.png

If this running sum has a reset dimension on one of the cross-table axis, then it will reset its value at the end of this axis. On the other axis, the reset dimension will be ignored. For example, in the footer of each row, if the reset dimension is [Year]:

Clipboard02.png

Similarly, with [Quarter], in the footer of each column:

Clipboard03.png

In previous versions, the running sum in the footer of the other axis would give unpredictable results. Typically, with a reset on [Year] in both the row and column footers, the result in the column footers would be meaningless:

Clipboard04.png

Versions where this wrong behavior has been corrected:

  • XI 3.1 since SP6
  • 4.0 since SP4
  • 4.1

 

Date Functions

LastDayOfWeek() uses Monday as first day of week

To respect the ISO 8601 standard, and to be consistent with the DayNumberOfWeek() function, the LastDayOfWeek() function now considers Monday as the first day of the week instead of Sunday.


Example:

XI R2:  LastDayOfWeek(todate(“05/11/2005”;”MM/dd/yyyy”)) returns 14 May 2005 (Saturday),

XI 3.1: LastDayOfWeek(todate(“05/11/2005”;”MM/dd/yyyy”)) returns 15 May 2005 (Sunday).

 

Migration:

To keep the original behavior, use the RelativeDate() function:

RelativeDate(LastDayOfWeek(todate("05/11/2005";"MM/dd/yyyy"));-1) returns 14 May 2005 (Saturday).
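Both week conventions can be modeled in Python (a sketch of the two behaviors, not the Webi implementation):

```python
from datetime import date, timedelta

def last_day_of_week(d, first_day=0):
    """Return the last day of d's week.
    first_day=0 → Monday-first (ISO 8601, XI 3.1+): week ends on Sunday.
    first_day=6 → Sunday-first (XI R2): week ends on Saturday."""
    # date.weekday(): Monday=0 .. Sunday=6
    days_to_end = (first_day + 6 - d.weekday()) % 7
    return d + timedelta(days=days_to_end)

d = date(2005, 5, 11)                    # a Wednesday
print(last_day_of_week(d))               # 2005-05-15 (Sunday, XI 3.1 behavior)
print(last_day_of_week(d, first_day=6))  # 2005-05-14 (Saturday, XI R2 behavior)
```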


Wrong time zone for formula with “CurrentDate” and a date field

The time zone of the server was applied to the “CurrentDate” evaluation (instead of UTC) when used with another date field in a formula. It is now evaluated in the UTC time zone.

 

“Week” function

The “Week” function returned an incorrect number when the last day of a leap year is a Monday. (This situation occurs every 28 years.)


Before update: Week # of Monday, December 31, 2012 = 53

After update: Week # of Monday, December 31, 2012 = 1
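Python's ISO 8601 week numbering follows the same rule as the corrected function:

```python
from datetime import date

# ISO 8601 assigns the last days of some years to week 1 of the following
# year; isocalendar() applies the same rule as the fixed Week() function.
week_of_dec31_2012 = date(2012, 12, 31).isocalendar()[1]  # a Monday
week_of_dec30_2012 = date(2012, 12, 30).isocalendar()[1]  # a Sunday
print(week_of_dec31_2012, week_of_dec30_2012)  # 1 52
```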

 

“MonthsBetween” function

A span of days across two months was counted as a month only if the day number of the starting date was less than the day number of the ending date. This did not work when the ending month has 30 (or 29/28) days, compared with a month ending on day 31.


(To come in 4.1 SP1) A span of days across two months is now counted as a month if the starting day # <= ending day #, or if the ending day # is the last day of its month while the starting day # > ending day #.


Before fix: MonthsBetween(31/03/2008 , 30/04/2008) =  0

After fix: MonthsBetween(31/03/2008 , 30/04/2008) = 1
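The fixed rule can be sketched in Python (an assumed model, consistent with the examples above):

```python
from calendar import monthrange
from datetime import date

def months_between(start, end):
    """Whole months between two dates (sketch of the fixed rule): the final
    partial month counts if end.day >= start.day, or if end falls on the
    last day of its month (so 31 Mar -> 30 Apr counts as one month)."""
    months = (end.year - start.year) * 12 + (end.month - start.month)
    last_day = monthrange(end.year, end.month)[1]  # length of end's month
    if end.day < start.day and end.day != last_day:
        months -= 1
    return months

print(months_between(date(2008, 3, 31), date(2008, 4, 30)))  # → 1 (after fix)
print(months_between(date(2008, 3, 31), date(2008, 4, 29)))  # → 0
```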


Merged Objects

Aggregation functions return correct values for original dimensions inside merged dimensions

Prior to XI R2 SP06, Web Intelligence did not return a correct result in the body of a table when aggregating an original dimension that participates in a merged dimension. (Note that the result is correct when the related dimension is in the table or in a free standing cell).


In the example below, depending on the query, the number of resorts is different. When asked for a count of the resorts from query 1 or 2, Web Intelligence returns the total number of resorts for the merged object instead of the individual object.

5_1_a.png

After XI R2 SP06, the system returns the correct count for the queried objects.

5_1_b.png  

Aggregation functions can process individual objects inside a merged object

The aggregation function (e.g. Count, Min, Max) applied to an object [A] participating in a merged object was processed on the value set of the merged object instead of on the given object [A]. It is now processed on the original object [A]'s value set.


Document migration:

To get the previous behavior, you can replace the original object with the merged object.

 

From BI 4.1 SP03 (patch 2 or later required), to ensure that the results of this formula correspond to the previous version, the system automatically rewrites the formula using an ad-hoc function “useMerged” with the aggregation expression as a parameter, to force the use of the merged dimension. This is available on request in BI 4.1 SP03 for reports created with versions earlier than XI 3.1 SP03.2. For more details, refer to the following section: Automatic formula rewrite.

 

Aggregation on a variable based on individual objects inside a merged object

An aggregation on a variable object whose formula is based on an object [A] that is participating in a merged object, was processed based on the merged object instead of the given object [A]. The aggregation is now processed according to the given object [A].


Migration:

To get the previous behavior, replace the original object with the merged object.



Aggregation in free cells of an object participating in a merged object, combined with the Where() operator

 

In free cells, the aggregation function (e.g. Count, Min, Max) applied to an object [A] participating in a merged object was processed on the value set of the merged object instead of the given object [A], when the context of this aggregation was modified by the Where() operator.

 

Workflow example:

  1. We have a first query “Query1” giving a single value for the [Year] dimension and a second query giving two other values for the same dimension.
  2. When in a table, the formula =Count([Query1].[Year]) Where([Query1].[Quarter]=”Q1”) would return 1, which is the correct result.
  3. When in a free cell, the same formula would return 3, which is the result of the merged [Year] dimension (the single value from Query1 + the two values from the second query).
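The difference between the correct table result and the buggy free-cell result can be sketched with plain Python sets (an illustrative model of the merged [Year] values):

```python
q1_years = ["2004"]            # Query1: a single [Year] value
q2_years = ["2005", "2006"]    # second query: two other values
merged_years = set(q1_years) | set(q2_years)

# Correct result: Count([Query1].[Year]) evaluated on Query1's own values
correct_count = len(set(q1_years))

# Buggy free-cell result: the count of the merged [Year] values instead
buggy_count = len(merged_years)

print(correct_count, buggy_count)  # 1 3
```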

 

This behavior was found in WebI XI 3.1 SP1 and was corrected in XI 3.1 SP2.

 

A regression was found in the following versions, when the “Extend merged dimension values” document setting was activated:

  • XI 3.1 SP5 FP5.6
  • XI 3.1 SP6 FP6.3 to FP6.5
  • XI 3.1 SP7

This regression was corrected on the same branches, in later patches.

 

To get the previous behavior, replace the object with the merged object.

 

Aggregation of Merged Data from Business Warehouse (BW)

 

Data fetched from a BW data source has a unique key, allowing rows with similar values to be treated as distinct.

 

In earlier versions of WebI 4.0, this key was wrongly managed when the data was merged, resulting in spurious rows in tables, such as in the example below.

 

Example with [Region] as the merged dimension:

img1.png

Since WebI 4.0 SP5 patch 5, this issue has been corrected. The keys are correctly managed and the above table will show the properly aggregated data with no additional rows:

img2.png

Versions where this issue has been fixed:

  • 4.0 SP5 patch 5
  • 4.0 since SP6
  • 4.1 since RTM

 

Merged dimensions combined with dimension objects


When a merged dimension and an object participating in that merged dimension are used in the same table, Web Intelligence 4.0 performs an intersection of the values coming from the merged dimension and the values coming from the participating object.


Example: We have two queries, each of them returning a year dimension, which are merged together:

Image1.png

When using the merged year with the year from the 1st query, the intersection of the two objects results in the values 2004 and 2005, while with the year from the 2nd query, the intersection of the two objects results in the values 2005 and 2006:

Image2.png

In version 4.1, this behavior has been modified and Web Intelligence performs a union instead of an intersection of the values. This new behavior was implemented to comply with the general behavior of Web Intelligence regarding merged dimensions: the merged dimension always takes precedence over any object participating in the merge, thus showing all values from the merged object.


This new behavior results in the same list of values regardless of which query the object comes from. For instance, in the above example, this results in the values 2004, 2005 and 2006 whether the year object comes from the 1st or the 2nd query:

Image3.png

Versions where this behavior has changed:

  • XI 3.1 since SP4 patch 3, SP5 patch 3 and SP6
  • 4.0 since SP5 patch 15, SP6 patch 10, SP7 patch 6, SP8 patch 1 and SP9
  • 4.1 since SP1 patch 5, SP2 patch 1 and SP3
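The change can be illustrated with plain set operations:

```python
q1_years = {"2004", "2005"}   # year values from the 1st query
q2_years = {"2005", "2006"}   # year values from the 2nd query
merged = q1_years | q2_years  # the merged dimension holds the union

# 4.0 behavior: intersection, so the list depends on the query used
wi40_with_q1 = sorted(merged & q1_years)  # ['2004', '2005']
wi40_with_q2 = sorted(merged & q2_years)  # ['2005', '2006']

# 4.1 behavior: the merged dimension takes precedence; always the union
wi41 = sorted(merged)                     # ['2004', '2005', '2006']
print(wi40_with_q1, wi40_with_q2, wi41)
```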

 

Data Ranking

 

“Ranked by” option using a dimension which is not in the table

 

Up until 4.0 SP07, a dimension used in the “Ranked by” option of the ranking functionality was always taken into account, even when this dimension was not part of the table where the ranking was applied.

 

Example: Ranking the top 2 [Quantity sold] by [Store name]:

  

Year | State      | Store name                  | Quantity sold
-----|------------|-----------------------------|--------------
2005 | New York   | e-Fashion New York Magnolia | 9,990
2006 | New York   | e-Fashion New York Magnolia | 11,651
2005 | California | e-Fashion Los Angeles       | 9,792
2006 | California | e-Fashion Los Angeles       | 9,869

 

Behavior until 4.0 SP07: if [Store name] is not part of the table, the ranking is not modified:

 

Year | State      | Quantity sold
-----|------------|--------------
2005 | New York   | 9,990
2006 | New York   | 11,651
2005 | California | 9,792
2006 | California | 9,869

 

Starting from 4.0 SP07, if [Store name] is not part of the table, then the “Ranked by” option is ignored and we therefore get a different ranking. Note that, in this particular case, the aggregated measure ([Quantity sold]) is not sorted:

 

Year | State      | Quantity sold
-----|------------|--------------
2006 | California | 17,769
2006 | New York   | 19,109

 

 

This behavior change can be found in the following versions:

 

  • In BI 4.0:
    • SP07, since Patch 7
    • SP08, since Patch 3
    • SP09, since Patch 1
    • SP10 and all patches
  • In BI 4.1:
    • SP03, up to Patch 6
    • SP04, up to Patch 3
    • SP05

 

Starting from 4.1 SP03 Patch 7, 4.1 SP04 Patch 4 and 4.1 SP05 Patch 1, the original behavior (prior to version 4.0 SP07) is restored: the dimension used in the “Ranked by” option modifies the ranking of the table whether or not it is part of the table.
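The “Ranked by” evaluation can be sketched in Python (a simplified model using the e-Fashion sample values; not the actual Webi engine):

```python
from collections import defaultdict

rows = [  # (Year, State, Store name, Quantity sold)
    ("2005", "New York",   "e-Fashion New York Magnolia", 9990),
    ("2006", "New York",   "e-Fashion New York Magnolia", 11651),
    ("2005", "California", "e-Fashion Los Angeles",        9792),
    ("2006", "California", "e-Fashion Los Angeles",        9869),
]

def top_n_by(rows, n, key_index):
    """Keep the rows belonging to the top-n values of the ranked-by
    dimension, scored by total quantity (a sketch of 'Ranked by')."""
    totals = defaultdict(int)
    for r in rows:
        totals[r[key_index]] += r[3]
    top = set(sorted(totals, key=totals.get, reverse=True)[:n])
    return [r for r in rows if r[key_index] in top]

# Ranking the top 2 by [Store name] keeps both stores here, so the table
# is unchanged even when [Store name] is not displayed (restored behavior).
print(len(top_n_by(rows, 2, key_index=2)))  # → 4
```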


Ranking data by a dimension, in sections


In Web Intelligence 4.0 prior to SP11, ranked measures were not properly sorted when the data was within a section and ranked by a dimension.

 

For example: top 3 [Sales revenue] ranked by [State] in the [Year] section:

Wrong sort in section.png

When a measure is ranked by a dimension, the sort expression is: =[M] in ([D]), where [M] is the measure and [D] is the dimension it is ranked by.

 

If in addition the data is within a (sub-)section, then the sort expression becomes: =[M] in ([D], section1; section2, …etc.), where section1, section2, etc. are the expressions of the sections containing the data block. This is the sort expression which has been fixed and which now gives a correct behavior:

Good sort in section.png

The behavior modification can be found in the following versions:

  • In BI 4.0, starting from SP11
  • In BI 4.1, since SP03 Patch 9, SP04 Patch 7, SP05 Patch 2 and later


Note that there is no behavior modification when no “Ranked by” dimension is defined for the ranking.


 

Other Functions and Calculation Changes

Previous() in a cross-table no longer returns values for the first column.

In prior versions, the Previous() function carried the last value in a row over to the first value of the next row in a cross-table.  This behavior was confusing because there was often no link between the last column of one row and the first column of the next.


In the following example, using XI 3.0, the first column in the second row returns the last column in the first row, even though there is no link between France and US.

3_2_a.png

 

In XI 3.1, Web Intelligence no longer returns a previous revenue for US in 2004 (since there is none available for that report).

3_2_b.png

This change is also applicable when you use Previous with the COL keyword. In this case the last value in a column is not carried over as the first value of the next column.

 

Measures will ignore incompatible dimensions

Prior to XI R2 SP03, a measure in a table returned an empty value when a dimension incompatible with the table was present in the section header.


In the example below, Year and Country are incompatible:

  4_1_a.png

After XI R2 SP03, Web Intelligence returns the measure value calculated using the compatible dimensions. In the example below, Revenue is calculated by Country:

4_1_b.png

 


"If" expressions return the same values for formulas and variables referencing formulas

The sum of a formula containing an "If" expression will now return the same result as a variable referring to an identical formula.


As shown in the following table, in XI R2, the sum for the formula if([Year]=”2002”;1;0) returns the sum of the visible values, whereas the sum of the variable referring to the same formula (MyVarIf) returns the sum of the multiple occurrences of the underlying data (which are hidden).

   2_2_a.png

If you deselect the “Avoid duplicate row aggregation” option, you can see the duplicated data.

 

2_2_b.png

In XI R3 and subsequent releases, the system returns the same result for the variable and the formula.

   2_2_c.png


UNV vs. UNX Count projection function


When creating a universe in Information Design Tool (IDT) or Universe Design Tool (UDT), each measure object can have its own projection function. The projection function is the default aggregation used by the Web Intelligence calculation engine when consuming a measure in a block. The projection function can be a sum (by default), a count, a min, a max, or it can be delegated to the data source. The projection function can also be set to “none”, in which case the Web Intelligence calculation engine will process the measure as a dimension (aggregation by identical values).


The “Count” projection function counts the occurrences of each unique value in the list of values of a measure. But it is processed differently in the Web Intelligence calculation engine, depending on whether the measure comes from a UNV or a UNX universe:

  • If the measure comes from a UNV universe, the count aggregation will not take into account the empty values of that measure
  • If the measure comes from a UNX universe, the count aggregation will take into account its empty values


As a result, if a UNV universe is exported as a UNX universe, a Web Intelligence document built with that universe as a data source might show different results before and after the export operation, if one of its measure objects is using a count projection function.
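The difference between the two count projections can be sketched as follows (a simplified model; Count is assumed here to behave as a distinct count over the measure's values):

```python
values = [10, None, 10, 20, None]  # a measure with empty (null) values

# UNV-style count: empty values are ignored by the aggregation.
unv_count = len({v for v in values if v is not None})

# UNX-style count: empty values are taken into account as a distinct value.
unx_count = len(set(values))

print(unv_count, unx_count)  # 2 3
```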


In a future version of Web Intelligence and IDT, it will be possible to choose between the two count projection functions: count with or without empty values.

 

Versions where this behavior is observed:

  • Since 4.0 (when UNX universes were released for the first time)



Automatic formula rewrite mechanism

 

Web Intelligence provides an Automatic Formula Rewrite mechanism that automatically modifies a selection of formulas (see list below) in a document. Formulas that follow certain patterns are modified when you open a document migrated from a previous version (see above for a list of the applicable versions). After modification, the formula returns the same result as before the calculation change.

 

We therefore recommend that you save the document so that the modifications are stored in it, completing the formula rewrite process.

 

The Automatic Formula Rewrite mechanism is available by default for documents migrated to BI 4.1 SP03, for the following formula patterns:

BI 4.1 SP03:

  • “where with dim as parameter in condition”
  • “running calculation reset on section”

BI 4.1 SP03 (patch 2 required):

  • “merged object in aggregation function”

BI 4.1 SP03 patch 3:

  • “running calculation in column”

 

The releases to which this solution applies are specified in the sections above.

 

Automatic formula rewrite mechanism rules


The rules used to automatically modify the formulas are stored in an XML file called "Formula_migration_rules.xml", located in the [installation directory]\[SAP BusinessObjects Version]\[OS]_[PLATFORM]\config folder.

 

For example, on Microsoft Windows:

  • Web Intelligence server (64-bit): C:\Program Files (x86)\SAP BusinessObjects\SAP BusinessObjects Enterprise XI 4.0\win64_x64\config
  • Web Intelligence Rich Client (32-bit): C:\Program Files (x86)\SAP BusinessObjects\SAP BusinessObjects Enterprise XI 4.0\win32_x86\config

 

BEWARE!!!


Modifying this file may have an unexpected impact on all of your Web Intelligence documents. In particular if you enable the "force" attribute, the formulas in your documents may be rewritten and introduce behaviors and results that you did not expect.


You should never use the "force" attribute for all of your documents; use it only for specific documents. To do this, enable the "force" attribute, open the document, save it, and then disable the "force" attribute immediately afterwards.

 

Note: If you modify the XML file, then you need to restart the server or the application to apply the changes.

 

The XML file has the following content:

<Rules>

  <Rule name="ExtractPlainDimFromWhereCond" enable="true" force="false">

    <!--List of version where the behavior changed-->

    <Version value="12.3.6.1006"/>   <!-- Titan XI3.1 SP3 FP06 -->

    <Version value="12.4.1.1188"/>   <!-- Titan XI3.1 SP4 FP01 -->

    <Version value="12.5.1.1357"/>   <!-- Titan XI3.1 SP5 FP01 -->

    <Version value="14.0.5.882"/>    <!-- 4.0 SP5 RTM -->

  </Rule>

  <Rule name="ResetOnSectionForCumulative" enable="true" force="false">

    <!--List of version where the behavior changed-->

    <Version value="11.5.10.0"/>

  </Rule>

  <Rule name="UseMergeDimInAgg" enable="true" force="false">

    <!--List of version where the behavior changed-->

    <Version value="12.3.2.0"/>

  </Rule>

  <Rule name="UseColForCumulativeOnXTabBody" enable="true" force="false">

    <!--List of version where the behavior changed-->

    <Version value="12.x.x.x"/> <!-- All XI3.x versions -->

    <Version minvalue="14.0.2.798" maxvalue="14.0.2.846"/>

    <Version minvalue="14.0.5.882" maxvalue="14.0.5.1249"/>

    <Version minvalue="14.0.6.1036" maxvalue="14.0.6.1145"/>

    <Version minvalue="14.0.7.1147" maxvalue="14.0.7.1147"/>

    <Version minvalue="14.1.0.896" maxvalue="14.1.0.896"/>

    <Version minvalue="14.1.1.1036" maxvalue="14.1.1.1072"/>

  </Rule>

</Rules>


Where:

  • enable=“true” means that the rule is applied, depending on the document version.
  • force="true" means that the rule is applied, regardless of the document version.

How to get most recent record in Webi report


Hi,

 

This document is helpful in a scenario where we have multiple records with different dates but the same dimension values, i.e. getting the most recent record in Webi using a date.

 

Sample data:

 

 

s1.PNG

 

Here, Emp no 107 has different dates of join with the same emp name and Sal.

 

Using the Rank function, we can get the most recent record by creating a variable as below.

 

s2.PNG

 

 

Observe that Rank shows 1, 2 and 3 for the three different records for Emp 107, as below.

 

s3.PNG

 

Now apply a filter on rank = 1; the table/output will then display only one record, with the most recent data.

 

s4.PNG
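The same "keep the most recent row per key" idea can be sketched in Python (hypothetical sample data; in the report this corresponds to the Rank variable filtered on 1):

```python
rows = [  # hypothetical sample: (emp_no, emp_name, date_of_join, sal)
    (107, "Smith", "2010-05-01", 5000),
    (107, "Smith", "2012-03-15", 5000),
    (107, "Smith", "2014-11-30", 5000),
    (101, "Jones", "2009-01-10", 4500),
]

# Keep only the newest row per employee -- the row Rank() would mark as 1.
latest = {}
for emp_no, name, doj, sal in rows:  # ISO dates compare correctly as strings
    if emp_no not in latest or doj > latest[emp_no][2]:
        latest[emp_no] = (emp_no, name, doj, sal)

print(sorted(latest.values()))
# [(101, 'Jones', '2009-01-10', 4500), (107, 'Smith', '2014-11-30', 5000)]
```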

Thanks.

How to perform actions in Webi Intelligence through Javascript


This document collects scripts that perform actions (click an icon, disable elements, and so on) in a Webi document through JavaScript.

For any of the scripts to work, we'll need to set the Webi view/modify preference (under BI Launch Pad preferences) to HTML.

Instructions

1 - Drop a blank cell onto the report

2 - Set the cell property Read As to HTML

3 - Copy and paste the text of the script (from <script> to </script>, inclusive) into the cell's formula

 

The first scripts are:

1 - Collapse the Left Panel

     <script>

     self.top.window[2].window[0]._widgets[77].clickCB();

   </script>


2 - Disable the Save button

     <script>

     self.top.window[2].window[0]._widgets[282].setDisabled(true)

   </script>


3 - Resetting ALL Input Controls in a Report

   <script>

     self.top.window[2].window[0].window[5].iFormResetAllCB()

   </script>

   

Comments and suggestions about actions to be performed will be highly appreciated.


Regards,

Rogerio

How to require a response to one and only one of a set of optional parameters


Earlier today I responded to the following discussion question...

 

Does anyone know how to create a webi prompts where the user must choose one or the other?

 

I have wanted to do this before, but I had never really dug into it. However, I had some time today so I decided to give it a try.

 

It is moderately complex and certainly not without its drawbacks which I will discuss at the end.

 

There are five basic steps...

  1. Remove the criteria which involves the optional prompts from the primary query.
  2. Create a secondary query with "dummy prompts".
  3. Use custom query script on the secondary query.
  4. Create variables in the report to capture the responses to the dummy prompts.
  5. Create a variable related to those dummy prompt responses and filter on it.

 

In my example I am going to use a universe of mine that has Accounts and look at the Account Open Date and Account Closed Date dimensions against which I will in effect create optional parameters. The attached document has related screen shots.


  1. Remove the criteria which involves the optional prompts from the primary query. I need to keep those dimension as Result Objects because I will need to filter on them in the report.



  2. Next create a secondary query with "dummy prompts". It doesn't really matter which universe I use because I don't want to return any data anyway; we are just going to use the responses to the prompts. I actually just duplicated the primary query and then added some criteria to ensure that I get no data; in my case that is Branch Number less than 0. I then added two fields to prompt on. One should be either my Account Open Date or Account Closed Date. The other can be any dimension that is a string. I chose "Equal to" as the operator for both of them and made them prompts.


  3. Now switch to "Use custom query script" just for this secondary query with the dummy prompts. I have to keep in mind that if I am using custom query script and I edit my query in any way my custom query script will be thrown away without warning and replaced by newly regenerated query script. I can always go make my changes again, but I need to remember to do that.

    To switch to custom query script, click the "View Script" icon at the top of the Query Panel and choose the "Use custom query script" radio button. For the first prompt, which corresponds to whatever date field you chose, change the prompt text in the first parameter of the @prompt function to make it generic. I made mine 'Enter Date:'...

    @prompt('Enter Date:','D','Account Attributes\Account Open Date',Mono,Free,Not_Persistent,,User:0)

    The parameters of the @prompt function corresponding to the string field require a little more modification. I need to change the prompt text again; I made mine 'Enter Date Dimension:'. The key here is to put the date dimension choices in the list-of-available-values parameter. So instead of pulling the possible values from the underlying data, I made mine {'Open Date','Closed Date'}.

    @prompt('Enter Date Dimension:','A',{'Open Date','Closed Date'},Mono,Constrained,Not_Persistent,,User:1)



    So I have created a secondary query that will return no data, but will prompt for a date value and to which date that value should apply.



    I hit "Run Queries". The query with the dummy prompts returns no data as expected.

  4. Then create two variables to capture the response to the dummy prompts. Remember the UserResponse function always returns a string. My Date Prompt Response variable formula to capture the date looks like this...

    =ToDate(UserResponse("Enter Date:"); "M/d/yyyy hh:mm:ss a")



    And my Date Dimension Prompt Response variable formula to capture to which dimension that date should be applied looks like this...

    =UserResponse("Enter Date Dimension:")



  5. Finally, create a variable based on those two variables which determines which date to filter on and whether the date entered matches. Here is my Date Comparison Flag variable formula...

    =If([Date Dimension Prompt Response]="Open Date"; If([Date Prompt Response]=[Account Open Date];1; 0); If([Date Prompt Response]=[Account Closed Date];1;0))



    Add a filter where Date Comparison Flag Equal to 1.
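The logic of steps 4 and 5 can be sketched in Python (an illustrative model; the function and argument names mirror the report variables, not any Webi API):

```python
def date_comparison_flag(dimension_choice, prompt_date, open_date, closed_date):
    """Return 1 when the prompted date matches the chosen date dimension,
    else 0 (a sketch of the Date Comparison Flag variable)."""
    if dimension_choice == "Open Date":
        return 1 if prompt_date == open_date else 0
    return 1 if prompt_date == closed_date else 0

# The report keeps only rows where the flag is 1.
print(date_comparison_flag("Open Date", "2015-01-02", "2015-01-02", None))    # 1
print(date_comparison_flag("Closed Date", "2015-01-02", "2015-01-02", None))  # 0
```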

 

That's it!



I fully acknowledge there are a number of concerns about this approach. Here are a few of mine, you may have more...


  1. The data is filtered in the report rather than the query resulting in potentially returning a lot more data than necessary and negatively impacting the performance of the report.
  2. Using custom query script may be disabled in your organization. Also, if the query is edited in any way the query script gets regenerated and the customization is abandoned.
  3. Although this solution in effect forces you to pick one and only one of two optional values, they are not truly optional parameters, which is potentially logically misleading.
  4. I have not tried this approach when choosing one value among a set of optional parameters whose values are not of the same data type. I think it could still be made to work, but I am not sure.

 

I know this is not a perfect solution, but I hope you can make it work for your situation or that it gives you an idea of something else to try.

 

Noel

Sometimes you just have to start over...


Yesterday I built a fairly basic WebI report with two queries. In one of the queries I forced it to return no data (e.g. where 1 = 0) and used custom query script to create what I refer to as dummy prompts, whose values I would use to specify one of three possible sort orders, among other things. Everything worked great within WebI, but when I tried to schedule it I ran into problems.

 

When I tried to change the default parameter values I was allowed to do so, but when I clicked "Apply" the parameter dialog box would just pop right up again with the same default parameter values rather than what I just entered. It would do this again and again. The only way out was to click "Cancel". I tried purging the data in the report along with all parameter values. It still didn't work, same result.

 

About this time I realized I could probably run the report in WebI with whatever parameter values I wanted, save it and schedule it. And again. I actually need to schedule this report with 7 different sets of parameter values. While I am pretty sure that would have worked, I shouldn't have to. I should also mention that occasionally when attempting to schedule this report some of the current parameters would not be presented for values, and an older parameter (from a previously saved iteration) that I had since deleted from the report would be presented for a value. One more thing: a coworker of mine at a different location was able to schedule this report just fine. I wanted to SCREAM!!!

 

We are running 4.1 SP01 Patch 2 in a multi-tiered load balanced environment. So we have two web servers and two application servers. So we suspected that he was getting a different server than I was and that was why it was working for him and not me. We decided to reboot the entire system (all four servers) last night after hours in the hopes that would fix it. No luck.

 

I decided to deconstruct and rebuild my report. I completely deleted the query with the custom query script and dummy prompts, saved the report and rebuilt that query and saved it again. It worked perfectly in WebI. However, now when I tried to schedule and update the parameter values I got an error...

 

Not on correct event type; expected START_ELEMENT, but was END_ELEMENT. (Error: INF )

 

I did some Googling and found nothing conclusive.

 

Up to this point I had been using the Java applet version of WebI (with Java 7.40) within Chrome. I tried loading the report and saving it from the HTML and Rich Desktop Client versions of WebI and continued to get the same error when trying to set the parameter values in the schedule.

 

My coworker suggested the report must have become corrupted somehow. Using the Rich Desktop Client I started over and rebuilt the report from scratch making it identical to the original including the query with the custom query script and dummy prompts. Now I am able to set the parameter values in the schedule as expected. I wish I knew what was wrong with the original, but I really don't. Thanks to my colleague's advice I have a working report and I am moving on.

 

Noel

Current month in year and prior-year current month (last year's current month)


Hi Developer,

 

This is Praveen Kumar. I have one question.

 

Please explain how to create a variable for the current month in the current year, and for the current month in the prior year.
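For reference, the underlying date logic can be sketched in Python; in Webi you would build equivalent variables from CurrentDate(), Month() and Year() (this sketch is illustrative, not a Webi formula):

```python
from datetime import date

def month_pair(today):
    """Return (current month, same month last year) as 'YYYY-MM' strings.
    Note: replace(year=...) would raise on Feb 29 of a leap year."""
    return (today.strftime("%Y-%m"),
            today.replace(year=today.year - 1).strftime("%Y-%m"))

print(month_pair(date(2015, 6, 15)))  # ('2015-06', '2014-06')
```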


report schedule failures


Below are some common report schedule failures and their resolutions.

These issues were found in BOXI 3.x with BEx queries as the back end. Below are some of the most frequently recurring Webi report schedule failures.

 

1)Object could not be scheduled within the specified time interval.

 

1.png

Cause:

Whenever we create a schedule, we specify its start and end dates. This issue occurs when the specified time period has expired.

Resolution:

We need to create a new schedule for the specific report with an extended expiry date.

 

2) Address Error:


2.png

Cause: This error occurs when an email address is entered incorrectly while scheduling, or when semicolons are missing between the email addresses.

Resolution: Validate the specified email ids in the schedules.


3) Invalid Prompt identifier

3.png

Resolution: This issue happens when prompt values are not defined properly. One scenario is that the prompt value that was used when scheduling no longer exists.


4) Web Intelligence server cannot be reached


4.png

Resolution:

The recurring schedule request failed to refresh in the specified time period due to a network-related issue, or the stack queue might be overloaded.


5) Unable to connect to SAP BW server System


5.1.png

5.2.png

Resolution: Check whether the process chains have failed.

The universe connection for the respective report needs to be verified.

Once everything is validated, refresh the schedule again.


6) While processing the Job servers

6.png

Resolution: Check whether the BOBJ servers are in Running or Stopped status.



I hope these issues and their resolutions will help some of you.



Dimensions as Input Control- Dynamic Control


I am not sure whether this topic has already been covered for Webi reports. I checked and did not find any links to it, hence I am mentioning it here.

 

I had a requirement for a report showing revenue and margin based on some dimensions. What I wanted was to have those dimensions themselves as an input control in the report.

Instead of offering the values of a certain dimension as an input control, I succeeded in offering a set of dimensions as the input control.

Below are the steps to achieve this.
Below are the steps to achieve the same.

 

  • Create a WebI Report or use any existing report.
  • Create a new variable (of type Dimension) and hard-code it with some value.

       1.png
  • Now, Go to Input Controls and click on New to create an Input Control.
  • In the properties tab check on Radio Buttons in the Simple Selection Pane:

       2.png

  • Then select the tab (marked in red) in the List of Values and make sure “Custom” is selected

         3.png

  • Once you have selected the tab you will get another window where you can put in all the dimensions which you would like to be available for your selection.

             4.png

  • Once you have all the dimensions selected, click OK twice, as the properties pane also needs to be closed.
  • Once you have done this you will see the input control available with the dimensions, but when you select a radio button it will not work yet, as these are hard-coded values.

               5.png

  • To make these values work we need to create one more variable and give it a name as “Selected Dimension” or any other name that suits.

                   6.png

  • In this we need to write a formula which would help the Input Controls to work:

               =If ReportFilter([Select a Dimension]) = "Sales Office" Then [Sales Office]

               ElseIf  ReportFilter([Select a Dimension]) = "Division" Then [Division]

               ElseIf ReportFilter([Select a Dimension]) = "MR Number" Then [MR Number]

               ElseIf  ReportFilter([Select a Dimension]) = "Material Group" Then [Material Group]

               Else 0

  • Once the variable is created, drag it to the report body and then try the input control radio buttons; they should work for this column, and the measures will also change based on your selection.
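The If/ElseIf dispatch in the "Selected Dimension" variable can be sketched in Python (an illustrative model; the field names are hypothetical):

```python
def selected_dimension(choice, row):
    """Return the value of the dimension the input control selected
    (a sketch of the 'Selected Dimension' If/ElseIf variable)."""
    mapping = {
        "Sales Office":   "sales_office",
        "Division":       "division",
        "MR Number":      "mr_number",
        "Material Group": "material_group",
    }
    # Fall back to 0 when the choice is unknown, like the final Else 0.
    return row.get(mapping[choice], 0) if choice in mapping else 0

row = {"sales_office": "Berlin", "division": "Retail"}
print(selected_dimension("Division", row))  # Retail
print(selected_dimension("Unknown", row))   # 0
```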

 

           

 

Remember to remove from the report body all the dimensions which you have used in the input variable, or else it will get confusing.

 

Now you have the input variables working for you:

Selected below is “Sales Office”

          7.png

“Division” is selected:

          8.png

 

 

To ensure the header name also changes along with the selection:

 

  • Create a variable, say “Name”, and write another formula:

               =If ReportFilter([Select a Dimension]) = "Sales Office" Then "Sales Office"

               ElseIf ReportFilter([Select a Dimension]) = "Division" Then "Division"

               ElseIf ReportFilter([Select a Dimension]) = "MR Number" Then "MR Number"

 

 

  • Copy the formula and paste it into the header of the column you dragged into the report, i.e. the “Selected Dimension” variable.

             9.png

Once this is done, the header name changes dynamically whenever you select a dimension from the Input Control.

 

Note: Make sure the spellings in the formulas match the dimension names exactly, or the values will not display properly.
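The two variables above implement a simple dispatch: the value picked in the input control selects both the column values and the header label. A rough, hypothetical Python sketch of that logic (the dimension names follow this example; the data rows are invented for illustration):

```python
# Sketch of the WebI If/ElseIf dispatch: the input-control choice picks
# which dimension's values are shown, and the header follows the choice.
rows = [
    {"Sales Office": "Delhi", "Division": "D1", "MR Number": "M100", "Material Group": "G7"},
    {"Sales Office": "Pune",  "Division": "D2", "MR Number": "M200", "Material Group": "G9"},
]

def selected_dimension(row, choice):
    # Mirrors: =If ReportFilter([Select a Dimension]) = "Sales Office"
    #          Then [Sales Office] ElseIf ... Else 0
    if choice in ("Sales Office", "Division", "MR Number", "Material Group"):
        return row[choice]
    return 0  # the Else 0 branch of the formula

choice = "Division"      # what the user ticked in the input control
header = choice          # the "Name" variable: the header simply follows the choice
column = [selected_dimension(r, choice) for r in rows]
print(header, column)    # Division ['D1', 'D2']
```

The point is that neither the table structure nor the measures change; only which dimension feeds the single display column.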

 

 

Hope this helps.

TABLE MAPPING FEATURE IN SAP BO


                                                         

 

Document

Table mapping feature in SAP BO 3.1

 

Description

This document is intended to help users and developers make effective use of the table mapping feature, which plays a pivotal role in delivering the same report with different data to different user groups.

     

Table of Contents

  1. Document Objective
  2. Table Mapping Feature
  3. Case Study
     3.1 Application Background
     3.2 Database Part
     3.3 BO Solution
  4. Advantages

 

 

 

1.    Document Objective

 

This paper is written to highlight the importance of the ‘Table Mapping’ feature in the SAP BO Universe Designer. It is observed that most developers do not use this feature as effectively as they could in their projects. This paper attempts to shed more light on it so that BO developers can fit the function into their applications wherever applicable.

 

2.    Table Mapping Feature

The main functionality of this feature is to provide a different data set to different users without making any changes to the underlying universe. To put it simply, the base table for the data selection can be changed dynamically based on the user group, while the universe and Webi report remain the same. The table structure must be identical for all base tables that are swapped dynamically.

 

This feature makes it possible to create a single universe and Webi report that serve different sets of data based on the user group, which keeps the reporting system very simple. Maintenance also becomes easier: when the report is enhanced, the change needs to be made in only one place.
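Conceptually, table mapping rewrites the FROM clause at query time for each user group while the query text stays the same. As a loose analogy only (not the actual BO mechanism), sketched in Python with SQLite and invented group names:

```python
import sqlite3

# Two staging tables with the same structure, different data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE STG_DAILY_DATA_INDIA (amount NUMBER)")
conn.execute("CREATE TABLE STG_DAILY_DATA_UK (amount NUMBER)")
conn.execute("INSERT INTO STG_DAILY_DATA_INDIA VALUES (10000)")
conn.execute("INSERT INTO STG_DAILY_DATA_UK VALUES (20000)")

# One access restriction per user group: original table -> replacement table.
TABLE_MAPPING = {"IN_USERS": "STG_DAILY_DATA_INDIA", "UK_USERS": "STG_DAILY_DATA_UK"}

def run_report(user_group):
    # The universe/report query is written once, against the original table;
    # only the table name is swapped per user group.
    query = "SELECT SUM(amount) FROM STG_DAILY_DATA_INDIA"
    mapped = query.replace("STG_DAILY_DATA_INDIA", TABLE_MAPPING[user_group])
    return conn.execute(mapped).fetchone()[0]

print(run_report("IN_USERS"))   # 10000
print(run_report("UK_USERS"))   # 20000
```

The same “report” returns different data per group, which is exactly the behavior the Designer feature delivers without any code.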

3. Case Study

The following case study helps to understand this feature better.

3.1 Application Background

 

Let us consider a Sales & Marketing/Banking/Finance application. It is international, and data is loaded into the database system from different regions of the globe on a daily basis.

 

This daily data is the base for sales calculations, so it is very important that users validate the data before it is loaded into the data-sensitive fact tables used for calculations.

 

Let us say there are 12 different applications (regions) from which the data is loaded on a daily basis: India, Malaysia, Germany, Singapore, China, UK, US, Russia, Finland, Denmark, Sweden and Norway. There are 12 separate staging tables to receive the data from these applications.

 

The database system receives the data by 10 AM every morning, and the business users run the report by 9.30 AM to validate the data. Once sign-off is received from the business, the data is loaded into the fact tables and the business-critical calculation starts.

 

The aim of this whitepaper is to create a single universe and report that fetch different data sets based on the user group.

 

3.2 Database part

 

The following staging tables are created in the database; they receive the base data from the 12 different applications.

 

  • STG_DAILY_DATA_INDIA
  • STG_DAILY_DATA_MALAYSIA
  • STG_DAILY_DATA_GERMAN
  • STG_DAILY_DATA_SINGAPORE
  • STG_DAILY_DATA_CHINA
  • STG_DAILY_DATA_UK
  • STG_DAILY_DATA_US
  • STG_DAILY_DATA_RUSSIA
  • STG_DAILY_DATA_FINLAND
  • STG_DAILY_DATA_SWEDEN
  • STG_DAILY_DATA_DENMARK
  • STG_DAILY_DATA_NORWAY

 

The table structure is the same for all the tables.

 

CREATE TABLE STG_DAILY_DATA_INDIA
(
  customer_id     NUMBER,
  period          NUMBER,
  Product_type    VARCHAR2(50),
  corporate_type  VARCHAR2(10),
  currency        VARCHAR2(10),
  amount          NUMBER
);
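Because all 12 staging tables share this structure, the DDL can be generated from a single template rather than written by hand. A hypothetical sketch (run here against SQLite, which tolerates the Oracle-style type names):

```python
import sqlite3

# The 12 regions named in the case study above.
REGIONS = ["INDIA", "MALAYSIA", "GERMAN", "SINGAPORE", "CHINA", "UK",
           "US", "RUSSIA", "FINLAND", "DENMARK", "SWEDEN", "NORWAY"]

# Same column list as the CREATE TABLE statement above, parameterized by region.
DDL_TEMPLATE = """CREATE TABLE STG_DAILY_DATA_{region}
(
  customer_id     NUMBER,
  period          NUMBER,
  Product_type    VARCHAR2(50),
  corporate_type  VARCHAR2(10),
  currency        VARCHAR2(10),
  amount          NUMBER
)"""

conn = sqlite3.connect(":memory:")
for region in REGIONS:
    conn.execute(DDL_TEMPLATE.format(region=region))

tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
print(len(tables))  # 12
```

Identical structure across the staging tables is what makes the table mapping swap safe in the first place.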

 

Sample data for STG_DAILY_DATA_INDIA:

 

CUSTOMER_ID  PERIOD    PRODUCT_TYPE  CORPORATE_TYPE  CURRENCY  AMOUNT
100          20150601  DEPOSIT       SME             INR       10000
200          20150601  LOAN          SME             INR       20000
300          20150601  DEPOSIT       LARGE           INR       30000
400          20150601  LOAN          LARGE           INR       40000
500          20150601  DEPOSIT       MEDIUM          INR       50000
600          20150601  LOAN          MEDIUM          INR       60000

 

The business user verifies the report at product, corporate and asset_liability level before the signed-off data is loaded into the production fact tables.

 

3.3 BO Solution

BO reports are to be designed for each country, showing the data at product, corporate and asset-liability level, so that the business users can validate the data before it is loaded into the application for critical calculations.

 

There are several ways to achieve this:

  • Creating a different universe for each table (12 universes) and one Webi report for each country (12 Webi reports)
  • Creating one universe that includes all the tables, plus 12 Webi reports, one for each country
  • Creating one universe for all the tables and one Webi report for all the countries

 

This paper discusses solution 3, which avoids complexity and keeps the system very simple to maintain.

 

 

                 

 

 

3.3.1    Final Report

 

The final report that the business user wants to see is as below.

 

 

 

t3.PNG

 

Note:

 

   Collecting data from multiple universes introduces complexity; alternatively, a single operational report with many vertical tables and charts involves a lot of formatting complexity, which can be handled with relative positioning between the cross tab, vertical tables and chart.

 

 

 

 

3.3.2    Steps to create Universe

  1. Insert any one of the tables created above

 

t4.PNG

 

 

2. Go to Tools-> Manage security -> Manage Access Restrictions

 

  t5.PNG

3. It opens the following screen. Click New to create a new restriction.

 

  t6.PNG

 

 

 

 

 

 

4. Click on Table Mapping tab to select replacement table

 

  t7.PNG

 

5. Enter the original and replacement tables and click OK.

 

 

 

t8.PNG

 

 

  6. The new restriction rule is created as below

 

 

 

 

 

t9.PNG

 

 

 

 

 

 

   7. Name the restriction rule and Click OK

 

t10.PNG

 

 

   8. The mapping rule is now created successfully. To apply it to the respective business user, click ‘Add user or group’.

 

 

t11.PNG

 

 

    9. The screen opens for user selection.

 

 

 

t12.PNG

 

 

     10.   Select the user, click >, and then click OK.

 

t13.PNG

 

 

     11.   Click on Apply and OK to apply the mapping rule to the particular user.

 

 

t14.PNG

 

 

     12.  Click on Preview in the previous step to preview the newly created mapping.

 

 

t15.PNG

 

 

3.3.3    Webi Report with Dynamic changes

The same report fetches different set of data based on the user who executes the report.

 

 

t16.PNG

 

 

4.    Advantages

  1. Complexity is eliminated by creating just a single report that serves multiple purposes: it displays different data sets depending on the user group, so creating multiple reports is avoided.
  2. Data security is attained, as sensitive data is protected based on the user group.
  3. The feature plays a pivotal role for enhancements: a change made in only one place is reflected for all users, saving a lot of manual effort and time.

Hiding Unwanted information/data in Webi Reports


Hi

 

This document focuses mainly on hiding unwanted data with the relevant LOV in Webi reports. There are scenarios where we need to hide objects, blocks, charts, rows or columns based on business needs; this document covers most of those scenarios.


Sample data used in all scenarios :

h1.png


Below is the list of scenarios covered in this document:

1. Hiding a dimension object

2. Hiding rows/columns when a dimension/measure value is null/empty

3. Hiding measure values that contain 0

4. Hiding duplicate records

5. Hiding data based on the most recent record

6. Hiding data based on user response

7. Hiding data based on a condition

8. Hiding data using an input control

9. Hiding a section


1. Hiding a dimension object


To hide a dimension object, select the particular object (here, Emp Name), right-click it, and in the list of options that appears you will see a Hide option.


h2.png


Select the Hide option and Emp Name is hidden in the report. To show it again, right-click the table/block and choose the Show hidden dimensions option; all hidden dimensions in the report are displayed again.


Note: a custom header (a new name you give a column) is not attached to the dimension. When you hide the dimension and show it again, the column reverts to its old name. If you want the new name to persist, you need to create a variable.


Example: in the table above, if we rename the Emp Name header to Emp First Name, hide it, and show it again, it displays Emp Name instead of Emp First Name.



2. Hiding rows/columns when a dimension/measure value is null/empty

There are two approaches: a) using a variable, b) changing the default table settings.


a) Using variable


Create a variable in the report: v1 = If IsNull([sal]) Then "0" Else "1"


Now restrict the table/block with a filter on the created variable equal to "1"; rows with an empty/null sal value are then hidden in the report.

 

h3.png

 

 

h4.png


The same approach applies to dimensions:


v2 = If IsNull([Emp name]) Then "0" Else "1"
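The idea behind approach (a) is just flag-and-filter: compute a marker per row from IsNull(), then let the block filter keep only the rows whose marker says the value is present. A minimal Python sketch of the same logic (the rows are invented for illustration):

```python
# Flag-and-filter: rows whose sal is null get a "hide" flag and are
# dropped by the block filter, mirroring the v1 variable above.
rows = [
    {"Emp name": "ab", "sal": 1000},
    {"Emp name": "cd", "sal": None},   # this row should be hidden
    {"Emp name": "ef", "sal": 3000},
]

def flag(row):
    # "1" when sal has a value, "0" when it is null/empty
    return "0" if row["sal"] is None else "1"

visible = [r for r in rows if flag(r) == "1"]
print([r["Emp name"] for r in visible])   # ['ab', 'ef']
```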


b) Changing the default settings of the table/block:


Right-click the table/block, choose the Format Table option, and change the settings on the General tab as shown below.

 

h5.png

 

 


By default the option “Show rows with all empty measure values” is enabled; disable it and rows with empty measure values are hidden in the report.


3. Hiding measure values that contain 0


 

By default, the table/block options “Show rows for which all measure values = 0” and “Show rows for which the sum of measure values = 0” (shown in the image above) are enabled; disable them to hide rows whose measure values are 0.


Example: in the table above, the employee ‘ps’ has 0 as its measure value.


h6.PNG


h7.PNG


4. Hiding duplicate records in webi


   How to get Unique /Distinct Records in Webi Reports



5. Hiding data based on the most recent record using a date dimension


How to get most recent  record in Webi report


6.Hiding data based on user response


http://bihappyblog.com/2011/11/05/dynamic-visibility-in-webi/


7. Hiding data based on condition


http://dwbi.org/analysis/business-objects/67-conditional-column-hiding-in-bo-4-0


8. Hiding data using input control


http://www.agiledss.com/en/blog/sap-bo-40-webi-5-new-features-will-make-you-drool.html


9. Hiding a section


Hiding Section in a Webi Report


More scenarios will be added over time.


Any suggestions/comments are welcome.


Thanks

Seshu



















Upgrade Projects: DESKI (3.0) to WEBI 4.1


Hi all,

       In this document I am sharing ideas based on a project I worked on. The level of effort varies with the complexity of the reports involved, and there may be other kinds of work as well.

       There are typically separate links for the various environments (Production, Development, QA etc.). We need to test the existing set of Deski reports and convert them to the Webi 4.0/4.1 format, either manually or by using the Report Conversion Tool or the Upgrade Management Tool.

 

We can explore the following areas in this kind of project:

 

      1) Functionalities: breaks, sections, filters (report level and query level), ranking, sorts, formulas and variables, input controls, hyperlinks.

      2) Various predefined and customized independent cells (e.g. page number, refresh date).

      3) Various prompts and the ordering of prompts.

      4) Combined queries, subqueries and alerters.

      5) Scheduling of reports.

      6) Various types of errors: #MULTIVALUE, #DIV, #DATASYNC, #CONTEXT, #RANK, #TOREFRESH, #ERROR etc.

      7) Reports involving macros.

      8) Reports involving complex charts, vertical/horizontal tables and crosstabs.

 

      The main tasks are to convert the reports to the upgraded version of the reporting tool, fix the data-mismatch errors in the Webi reports compared to Deski, and fix the formatting errors in the PDF output (Webi 4.1 in this case).

Accordingly, we test the reports, then fix them, and finally upload everything to the production environment.

 

General steps for testing (these are specific to our project, which had strict restrictions; they may vary from project to project and client to client):

 

    TESTING OF REPORTS:-

 

      We followed the following steps for Testing:-

 

        Rules:

                  No changes (even the slightest) should be made in the Public Folders.

                  Don't save anything in the Deski reports (even when you make small changes for cross-checking something in Deski, such as purging the data).

 

    1) Save the assigned set of reports to the Favorites folder.

    2) Open the Deski report in Infoview. Don't make any changes to it; the Deski report should be used only as a reference for comparison.

    3) Open the Webi report and refresh it, noting the time taken; if it consumes a lot of time, schedule the report instead.

    4) Check the prompts (if present, the prompt window pops up automatically) and their order, and compare with the order of prompts in the Deski report (which means refreshing the Deski report as well).

    5) Check the display mode of the report, i.e. whether it is in "Quick display" mode or "Page" mode (compare with the corresponding Deski report).

    6) Export both the Deski and Webi reports to Excel and PDF format.

    7) Using the relevant tool (in our case an Excel comparator or merge tool), compare the Excel exports of Deski and Webi for data validation: check for data mismatches, errors, date-format mismatches etc., as directed by the senior developers.

    8) Open the exported PDF reports (Deski and Webi) and verify the formatting: check whether there is any format mismatch compared to Deski (or follow the steps as directed by the senior developers).

    9) Finally, open both the Deski and Webi reports, purge the table data in both, and check for format mismatches or errors (if present).

 

  NOTE: Record the findings and conclusions of each step above in the Excel tracker used by the team.


   DEVELOPMENT:

           Basically, there is rarely scope for developing reports from scratch. The senior developers resolve the various errors involved (#errors), macro-related issues, formula modifications to meet Webi standards, errors that occur while purging data, etc. However, there are cases where reports do need to be developed from scratch (e.g. reports involving macros, or reports where data is not populated properly because they draw on multiple universes and different data sources).

              The report-fixing steps vary across reports. For more information, have a look at the following website:

                        https://dwbicastle.com/tag/webi-report-_-quick-display-mode/

         

             For Errors:

                         http://bobi.blog.com/2012/08/26/webi-report-errors/

 

 

Till then, happy learning!

 

How to avoid/solve 20% of your Webi Incidents ! - Fonts

While working with Web Intelligence reports, you can encounter various font-related issues.

 

Font-related issues come in different forms:

  • A font doesn't appear in the report
  • Question marks or squares instead of letters
  • Font install/export issues in Web Intelligence
  • Character-encoding issues due to a missing UTF-8/NLS_LANG variable

 

What should you check?

 

1.     Check whether you have problems with other font types as well (e.g. Arial Unicode MS).

 

2.     Reinstall the affected font on the server and client machines:

  • Download the font from the internet.
    Sometimes fonts are available in several versions; use the latest if possible.
  • Open ’Control Panel → Fonts’
  • Copy the downloaded font into the ’Fonts’ folder

 

3.     Make sure, that you’ve modified the fontalias.xml on Server and Client:

  • Path for client:
  • Path for server:

 

Example entry in fontalias.xml file in case of the Code 128 barcode font:
Fonts_1.png

For PDF export the i18n.xml file needs to be edited as well; see the example:

Font_2.png

Please always back up the files before editing, and restart Tomcat and the SIA after you're done.

 

4.     If the issue is environment specific:

  • Compare the font version
  • Check the registry entries for nls parameters
  • Compare the environment variables
  • For character encoding, check the connector for the port Tomcat runs on in tomcat/conf/server.xml and make sure the URIEncoding=”UTF-8” parameter is set.
  • Copy a character into a free report cell from a Word document in which the font appears correctly.
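For reference, the URIEncoding attribute belongs on the Connector element in tomcat/conf/server.xml. A typical entry looks like the following; the port and the other attributes are illustrative and will differ in your installation:

```xml
<!-- tomcat/conf/server.xml: illustrative HTTP connector entry -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443"
           URIEncoding="UTF-8" />
```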

 

 

Helpful SAP KBAs:

 

How to become more professional?


See also:

  SAP KBA 1323216 - Where are the installation guides, what's new, fixed issues, deployment guides for SAP Analytics products?

How to avoid/solve 20% of your Webi Incidents ! - Webi Reports - Check with different Client Machine


To exclude any application incompatibilities, test and compare the product behavior on different client machines.

This easy troubleshooting test is recommended whenever you experience functional errors or misbehavior in the BI Launchpad or the Webi Rich Client.

Functional problems on your client machine can be caused by an upgrade of the Windows environment; changes to the database driver, application, browser or Java version; a new application installation; or even a small configuration change. Symptoms include, for example, being unable to select/view/edit a Webi document, or functions and buttons becoming unclickable.


When should you do this check?


E.g. Login to BI Launchpad and select HTML mode.

Now you want to create a Webi document based on a universe, but you cannot select any available universe from the universe list. The universe list acts as an image, and the cursor is shown as a cross:

  • Test the same steps on other client machines.
  • In case the same steps are working on the other machine, compare e.g. the installed operating system, the language locale, the installed applications, the browser and Java version etc.
  • Please, contact your IT Administrator Team, to detect and fix the changes in your environment.

 

E.g. You want to use the BI Launch pad in Java Applet mode.

When you are opening the Webi document, the process is significantly slower e.g. takes 2 minutes.

  • Test the same steps on other client machines.
  • In case the same steps are working on the other machine, compare e.g. the installed BI Client tool version, the browser and Java version, Java configuration, the CPU settings etc.
  • Please, contact your IT Administrator Team, to detect and fix the changes in your environment.

 

E.g. You want to refresh a Webi document in Webi Rich Client 3-tier mode, but you are getting error 10901.

    • Test the same steps on other client machines.
    • In case the same steps are working on the other machine, compare e.g. the installed BI Client tool version, the installed database middleware etc.
    • Please, contact your IT Administrator Team, to detect and fix the changes in your environment.

     

    1. Login with your user profile to another client machine. (If possible, do it on 1-3 different machines.)

    2. Test the same steps.

      a. In case the same steps work on the other machines, compare the possible differences: operating system (OS) version, language locale, installed applications, internet browser version and configuration, Java version and configuration, installed middleware, BI client tool version etc. Please contact your IT administrator team to detect and fix the changes in your environment.
      b. In case you get the same product behavior, further troubleshooting steps will be needed, such as testing with BI administrator rights and Enterprise authentication, testing with cleared browser and Java caches, or testing in a different browser version.

     

    How to become more professional?

     

    See also:


    How to avoid/solve 20% of your Webi Incidents ! - Webi Reports - Data Refresh by Scheduling


    Scheduling is the process of automatically running a Web Intelligence document at a specified time. Scheduling refreshes dynamic content or data in a Webi document, creates instances, and distributes the instances to users or stores them locally.

    An instance is a version of the Webi document that contains data from the time the document ran. You can view a list of instances in a Webi document’s history. If you have access rights to view Webi documents on demand, you can view and refresh any instance to retrieve the latest data from the data source. Scheduling and viewing instances ensures that Webi documents have the most up-to-date information available for viewing, printing, and distributing.


    When should you do this check?

     

    E.g. you want up-to-date Webi reports with large data volumes or complex calculations, without using a considerable amount of system resources:

      • Set the recurrence of the schedule to when system usage is lowest, e.g. run the object daily in a specific early-morning window.

      Report_Scheduling_1.png

      E.g. Make available the latest instance for every user in their BI Inbox.

        • Set the destination to BI Inbox and select the “Everyone” user group as recipients.

        Report_Scheduling_2.png

        E.g. you want to check the status of your scheduled instances:

          • Right click on Webi document – History – Status

          Report_Scheduling_3.png

          Report_Scheduling_4.png

          1. Login to BI Launch pad.

          2. On the Documents tab, right-click the object to schedule and select Schedule.

          3. In the Schedule dialog box, click a category in the navigation list, and then set options in that category for the object.

          4. Repeat this step for each category that you want to set scheduling options for.

          5. Click Schedule.

          6. The History dialog box appears, displaying your scheduled job as an instance with a status of Running.

           

          How to become more professional?

           

          See also:

          How to avoid/solve 20% of your Webi Incidents ! - Webi Reports - Re-export the Web Intelligence document to the repository


          This easy troubleshooting test is recommended every time, when you are experiencing error with a specific Webi document.

          Note: you have to install Webi Rich Client to test these steps. The Webi Rich Client is available in the BI Client Tool installation pack.


          When should you do this check?

           

          E.g. Web Intelligence document gets corrupted and throws error at opening or editing. Other reports work without any error.

            • Open the report in Webi Rich Client - Save to Enterprise – Advanced – Save for all users/Remove document security - Save

            Report_Re-export_1.png

            How to become more professional?

             

             

            See also:

            How to avoid/solve 20% of your Webi Incidents ! - Webi Reports - Check/Test Webi preferences


            In BI launch pad, you can change your viewing and/or modification interface for Web Intelligence.

            The view interface is used to perform basic viewing tasks; the design mode is used to modify a document.

            The modification interface is used for creating and/or editing Web Intelligence documents.

            The available interfaces for Web Intelligence are:


            1. HTML

            2. Applet

            3. Desktop

            4. PDF (only for view interface)


            When should you do this check?

             

            E.g. if you want to use HTML view mode, but Applet modify mode, set the following:

              • Preferences – Web Intelligence – View – HTML
              • Preferences – Web Intelligence – Modify (creating, editing and analyzing documents) – Applet

              Report_Webi_preferences_1.png

               

              1. Login to BI launch pad.

              2. On the header panel, click Preferences.

              3. In the Preferences dialog box, click Web Intelligence.

              4. Under View or Modify, choose a reading/modification interface:

                • Select HTML (no download required) to view documents over the Internet, without downloading components.
                • Select Applet (download required) to view documents with a Java applet that must be downloaded.
                • Select Desktop (Rich Client, Windows only, installation required) to view documents with a desktop application that must be downloaded.

                Select this option if you plan to work offline occasionally.

                • NOTE: for View mode an additional option is available. Select PDF to view documents in PDF.

                5. Click Save & Close.

                 

                How to become more professional?

                • To learn more about Web Intelligence preferences, please, read the Business Intelligence Launch Pad User Guide for 4.1 SP6 (topic: 4.5 Web Intelligence preferences)
                • Source for BI 4.1 product guides

                 

                See also:

                How to avoid/solve 20% of your Webi Incidents ! - Webi Reports - Create Report Copy


                To exclude e.g. cache errors for a specific Web Intelligence document, create a copy from the original report. This very easy troubleshooting test is recommended every time, when you are experiencing error with a specific Webi document.


                When should you do this check?

                 

                E.g. Web Intelligence document gets corrupted and returns error at opening or editing. Other reports work without any error.

                  • Select the corrupted Webi report – right click – Organize - Copy
                  • Select a destination folder – right click – Organize – Paste
                  • With these steps you are creating a new entity with a new ID/CUID in the BI environment

                  Report_Create_a_copy_1.png

                  1.     Login in to BI Launch pad.

                  2.     Select the corrupted Webi report – right click – Organize - Copy

                  3.     Select a destination folder – right click – Organize – Paste

                   

                   

                  How to become more professional?

                   

                  See also:

                  How to avoid/solve 20% of your Webi Incidents ! - Webi Reports - Refresh on Demand


                  Refreshing a Web Intelligence document enables you to view data on demand. However, refreshing may use a considerable amount of system resources.

                  It can be run manually by the user or by turning on the “Refresh on open” document setting.

                   

                  Note: The “Refresh on open” automatically refreshes the results in reports with the latest data from the database each time the document is opened. Data after the refresh is treated as new data because the refresh purges the document.

                   

                  Before you can refresh data in a Web Intelligence document, you must have refresh rights for the document, and the server must contain the data source information.

                   

                   

                  1. View/Reading mode

                  If you want to refresh a Webi document in “Reading” mode:

                  • Click on “Refresh” icon in the Webi toolbar: this will refresh all the queries in the document.
                  • If you click on the arrow next to the “Refresh” icon, you can choose which report query should be refreshed.
                  • Open the Web Intelligence document.
                  • Click on the “Refresh” icon in the toolbar.

                   

                  Refresh_on_Demand_1.png

                  2. Modify/Design mode

                  If you want to refresh a Webi document in “Design” mode:

                  • Click on the “Refresh” icon in the Webi toolbar: this will refresh all the queries in the document.
                  • Open the Web Intelligence document.
                  • Click on the “Refresh” icon in the toolbar.

                   

                  Refresh_on_demand_2.png

                  3.   Refresh on open

                  If you want to turn on/off the “Refresh on open” document setting:

                  • Properties – Document – Document Summary – Options – Refresh on open
                  • Or use the quick keys: Ctrl + R
                  • Open the Web Intelligence document.
                  • Click on the “Refresh” icon in the toolbar.

                   

                  Refresh_on_demand_3.png

                  How to become more professional?

                   

                  See also:
