Performance Tuning WebCenter Portal and WebCenter Content for a High Concurrency Intranet – Part 1

Performance tuning is a critical part of any software system or application rollout, yet it is often overlooked. Tuning should be addressed at every stage of the implementation cycle: installing and properly configuring the software suite, following development best practices, and then load testing and adjusting the configuration before rollout.

Oracle WebCenter Portal’s default installation instructions and settings do not properly address a company intranet that needs to support up to 10,000 concurrent users. In fact, Fishbowl’s recent experience at a customer showed that the default settings can barely support 200 concurrent users. We recently went through the exercise of properly tuning the combination of WebCenter Portal and WebCenter Content to meet the requirement of 10,000 concurrent users.

Below is Part 1 of Fishbowl’s findings on Oracle WebCenter performance tuning, covering some of the highlights from that exercise.


First, a quick background on how we set up this intranet to give some context for the tuning that was done.

  • Content driven site rendered as a single space in WebCenter Spaces PS4
  • User contribution in WebCenter Content (UCM) drives site navigation and page content
  • Site rendered using a combination of 2 methods

(Shameless Plug: I will be diving into more technical details on the implementation above in Session 414 on Thursday, April 26th at 8:30 AM during Collaborate 2012 in Las Vegas. I will be discussing Fishbowl’s Intranet/Portal in a Box framework, which includes the performance tuning configurations covered in parts 1 and 2 of this blog post.)


The goal of tuning any web application, such as WebCenter, is to provide enhancements in 2 main areas:

  1. Page response time (make the business unit and business user happy)
  2. Server capacity (make IT happy)

We ran through a number of tweaks and changes in the course of the tuning, but I wanted to highlight 3 things in particular.

  1. Caching (part 1)
    1. Application caching
    2. Portlet response caching
  2. Web server settings (part 2)
  3. Java Virtual Machine (JVM) tuning (part 2)

This post will focus on caching.  Part 2 will focus on web server settings and JVM tuning.


It goes without saying that proper caching will yield the biggest improvement in site performance of any configuration change you make. In this instance we implemented 2 different types of caching to improve both page response time and server capacity.

Application Caching

Since almost every aspect of the site, from navigation to page content, is driven by user contribution in WebCenter Content, a large number of calls are made to the content server to retrieve all of this information. This goes all the way back to the Stellent days, when one of the main optimization points for any application that used the content server was reducing the number of service calls being made.

In the un-cached implementation of the intranet site, a content page could make close to 20 service calls to UCM to construct everything that was needed. To reduce this, we leveraged Oracle Coherence to cache as much application-specific data as we could, which cut the service calls for a page from around 20 down to 1 or 2 once the data was cached.

Here is an example of caching the data structure that represents the navigation. Choosing a cache key was important, since we needed to support the various features of our framework (personalization, multi-site/space, multi-lingual).

[sourcecode language="java"]
// Key includes user, language, and space to support personalization,
// multi-lingual, and multi-site caching.
String cacheKey = userName + "_megaMenu_" + language + "_" + spaceName;
NamedCache cache = CacheFactory.getCache(IntranetUtil.MEGAMENU_CACHE_NAME);
cache.put(cacheKey, megaMenu, IntranetUtil.CACHE_TIMEOUT);
List cachedMenu = (List) cache.get(cacheKey);
[/sourcecode]
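The surrounding read-through pattern can be sketched in plain Java, with a `ConcurrentHashMap` standing in for the Coherence NamedCache. The class and method names below are illustrative, not part of the actual framework, and TTL handling is omitted for brevity:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Simplified stand-in for the Coherence-backed navigation cache.
class MegaMenuCache {
    private final Map<String, List<String>> cache = new ConcurrentHashMap<>();

    // Key combines user, language, and space so personalized,
    // multi-lingual, and multi-site variants are cached separately.
    static String buildKey(String userName, String language, String spaceName) {
        return userName + "_megaMenu_" + language + "_" + spaceName;
    }

    // Return the cached menu, building it with one service call on a miss.
    List<String> getMenu(String userName, String language, String spaceName) {
        return cache.computeIfAbsent(
            buildKey(userName, language, spaceName),
            key -> loadMenuFromContentServer(spaceName));
    }

    // Stand-in for the single UCM service call made on a cache miss.
    private List<String> loadMenuFromContentServer(String spaceName) {
        return Arrays.asList("Home", "News", spaceName);
    }
}
```

With this shape, repeated page renders for the same user, language, and space hit the in-memory map instead of generating further content server calls.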

Portlet Response Expiration Caching

The vast majority of the site uses JSR 286 content consumption portlets to pull back and display content. Inherently, the portlet container implementation in WebCenter provides a performance benefit over ADF task flows, since portlets render in parallel. However, this alone was not enough to make the intranet’s home page fast enough for everyday use.

The home page in particular has 6 content portlets on it, providing a dashboard view of current news and content for the user. Looking at what happens on the back end for each portlet call, it quickly becomes clear that a cache needs to be put in place.

(Diagram: Portlet Request Flow)

Each user who requests the home page generates 6 concurrent requests to the portlet server and 6 requests to the content servers. This quickly gets out of hand as user load scales up; bottlenecks show up in multiple places:

  • Portlet request queue from Spaces to Portlet server
  • Portlet server response time due to load
  • CIS (Content Integration Suite) connection pool in portlets
  • Content server available socket connection pool
  • Content server database connection pool

To alleviate all of this, we turned on expiration caching for the portlets, which is part of the JSR 286 specification. With this enabled, the HTML response from each portlet instance is cached on the Spaces managed server on a per-user basis (once again, to support personalization). This removed the bottlenecks entirely, since the home page is only rendered once every 30 minutes for each user.
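As a sketch, this kind of expiration caching is declared per portlet in portlet.xml. The portlet name below is illustrative; the 1800-second value matches the 30-minute window described above, and the `private` cache scope keeps a separate cached copy per user, as JSR 286 defines:

```xml
<!-- portlet.xml fragment; other required elements elided. -->
<portlet>
  <portlet-name>newsContentPortlet</portlet-name>
  <!-- Cache the rendered markup for 30 minutes (value is in seconds). -->
  <expiration-cache>1800</expiration-cache>
  <!-- JSR 286: "private" caches per user; "public" shares across users. -->
  <cache-scope>private</cache-scope>
</portlet>
```

A value of -1 would cache the response indefinitely, so a finite expiration is the safer choice for content that editors update during the day.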


In part 1 of this series we focused on proper caching setup to reduce load on the system and provide much better page response time for users.  In part 2 we will discuss web server configurations and JVM tuning to provide even better page response time and increase the number of user sessions the system can handle at once.

4 thoughts on “Performance Tuning WebCenter Portal and WebCenter Content for a High Concurrency Intranet – Part 1”

  1. Hi Andy,

    We are looking for ways to implement application caching for our WebCenter Portal application. I realize Oracle Coherence is meant for distributed caching and is powerful for availability and reliability. How much better is it to keep things in a named cache rather than ADF application scope? We have to serve around 20,000 users who will eventually pull in content from UCM, SOA, OBIEE, the discussion server, etc.


    • Hey Sunil,

      Keeping things in ADF application scope will certainly work from a caching perspective and gives you the same functionality as a NamedCache. From a pure read/write perspective, neither is faster than the other, because both are backed by Map objects in memory. A few things to take into consideration:

      1. Application scope is stored individually on each node of the cluster, and there is no replication between nodes. In the case of a failover, you’ll lose some cached data.

      2. Since the data is stored individually, you end up duplicating some cache data across nodes. Note: local Coherence caches act the same way as application scope, so you gain nothing here if you are using that configuration.

      3. Using a NamedCache allows you to offload the memory footprint of the cache from the managed server JVM into a separate JVM, which helps keep your JVM heap sizes in a reasonable range.

      4. Looking to the future, if you run into performance problems or need to scale to even more users, Coherence gives you much more flexibility than application-scope caching. If you start with local named caches (the cheap version of the Coherence license), you won’t need to change any code to scale to an enterprise or grid deployment.
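      As a hedged illustration of that last point, the same NamedCache code can be repointed from a local to a distributed (partitioned) scheme purely through the Coherence cache configuration file, with no application code changes. The cache and scheme names below are illustrative:

```xml
<!-- coherence-cache-config.xml fragment; names are illustrative. -->
<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>megaMenuCache</cache-name>
      <!-- Point at "local-menu-scheme" instead for a single-JVM cache. -->
      <scheme-name>distributed-menu-scheme</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <!-- Partitioned cache spread across the Coherence cluster. -->
    <distributed-scheme>
      <scheme-name>distributed-menu-scheme</scheme-name>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
    <!-- Simple per-JVM cache, comparable to application scope. -->
    <local-scheme>
      <scheme-name>local-menu-scheme</scheme-name>
    </local-scheme>
  </caching-schemes>
</cache-config>
```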

      Hope that helps,


  2. Dear Andy,
    I just want to know how you do the JVM tuning and the exact process for determining which values to set.
    Also, when are your second and third parts getting released? Desperately waiting for them.

