Recently in BEA/Oracle Category

WCI Recurring Jobs Don't Party in 2013

Happy New Year!

Now that the holiday parties are over, we get to deal with the mess that comes so often in technology when calendars turn over. The mess I found myself facing this morning is due to a "feature" of WCI, so you may have it too.

Recurring jobs run on an interval that has an end date, and the portal UI defaults that end date to Jan 1, 2013. Any pre-existing job that was set to run periodically with the default end date is no longer scheduled to run. This includes many crawlers, syncs, maintenance jobs, and so forth. Any new job set to run on a recurring basis also defaults to Jan 1, 2013, which, now that the date is in the past, causes the job to run once but never again.

You can query the portal database to get a list of jobs that [1] ran as recently as December and [2] aren't scheduled to run again. This is the list of likely candidates that would need to be rescheduled. Also, the query gives URL suffixes to let you easily create links to open the job editors. In your case, you may want to put the full URL to your admin portal in there. In my case, I used this query for many systems with different prefixes, so I kept it generic. Here's the query I used:

  -- note: PTJOBS and PTUSERS are the standard portal job and user tables
  select j.NAME 'job name'
      ,u.NAME 'owner name'
      ,'' + cast(j.objectid as varchar(9)) 'admin editor page suffix'
  from PTJOBS j
      join PTUSERS u on u.OBJECTID = j.OWNERID
  where j.NEXTRUNTIME is null and j.LASTRUNTIME > '12/1/2012'
  order by [owner name]


Update: This is now BUG:15947880 and KB Article:1516806.1.

Universe: I am resigning from Oracle.

I know the universe of interested parties shrinks every year as the sales of the WCI portal (née Plumtree) decline, Oracle promotes a different product, and old customers move on to new platforms. But! Some of you are still out there reading, and so thanks!

Fortunately for you all, I'm not going far. I'll continue working with the WCI portal for a long-time customer, Boeing, for whom I've consulted off and on, but mostly on, since 2004. So the blog entries will continue to sporadically pop into your RSS feeds.

I have three company laptops that I need to return. The newest one Oracle issued to me several months ago, and I'm sure it will be redeployed to another employee. The older ones, however, will likely be "decommissioned." Occasionally I read stories about crooks who buy old hard drives to recover their data and then engage in all sorts of nefarious crimes. I don't want my data open to that risk. Since I don't know exactly what Oracle's decommissioning process is, and since any company's processes may not be perfectly followed, I decided to take extra care to destroy the personal, customer, and corporate data that had been on the hard drives.

So here's what I'm doing tonight, and you probably should do something similar when you let go of your old laptops, whether you're disposing of an old personal machine or resigning from the job that had run its course:

  1. Copy any needed data off the old laptop (e.g. this photo from when kiddo was a newborn)
  2. Create a "live CD" or a bootable disk with a *nix operating system on it. I used Ubuntu.
  3. Boot your old laptop from the CD. On my Dell laptop, I used F12 to get a one-time boot menu to select that I wanted to boot from CD rather than from the hard drive.
  4. Identify the partition name for your disk. I did this by going to System -> Administration -> GParted Partition Editor.
  5. Open a console.
  6. Type a command like this one at the prompt, where /dev/sda2 is my laptop partition to wipe:

    sudo shred -vfz -n 1 /dev/sda2

  7. Wait while the machine overwrites your entire disk first with random data, then with zeros.
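If you want to convince yourself shred does what it claims before pointing it at a real partition, you can try the same flags on a throwaway file. This is a hypothetical demo with a made-up file path; the device path in step 6 is the real target:

```shell
# Demo of shred on an ordinary file rather than a partition -- same flags.
printf 'my secret data' > /tmp/shred-demo.txt

# One pass of random data (-n 1), then a final pass of zeros (-z);
# -f forces write permission, -v prints progress.
shred -vfz -n 1 /tmp/shred-demo.txt

# The original content is no longer present:
grep -q 'secret' /tmp/shred-demo.txt && echo "still readable" || echo "wiped"
```

The file keeps its size but holds only zeros afterward, which is exactly what the partition-level wipe does across the whole disk.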

That's it. There's not much left to find on the drive. This is a much better approach than simply reformatting the drive, because reformatting merely clears the disk's address tables while leaving the data intact and retrievable by any Dr. Evil who makes it his business to do such things. Of course, you could be more fastidious than I was. Another blog gives a more detailed review of the technical issue and even more thorough ways to knock it out.

After erasing the data, I went the extra mile and installed Ubuntu. This way anyone who turns on the computer will be able to log in, see that nothing is readily available, and find it to be a generally useful machine.


PS: Yes, I'm extraordinarily happy to move on from Oracle!
Publisher is an old product, but it still has legs in some organizations. I recently helped a customer set up Publisher to load balance the portion of the app used by browsing users, the readers of published content. The discussions about how to set this up were difficult until I diagrammed the components clearly.

If you ever need to work with Publisher, and especially if you want to increase the reliability of the reader component, then I hope this diagram will be helpful to you.



F5 Terminology Cheat Sheet

Technology is a land of overlapping and confusing terminology. I've been involved in plenty of confusing conversations about F5 products as they relate to WCI portal deployments, and I've worked to develop a more precise use of terms. To help a colleague sort out the mishmash, I made this list of objects we commonly discuss. Maybe you'll find it useful too?

In addition to understanding the terms, I think it's helpful to recognize areas of overlap and be careful to avoid confusion. For example, since the VMWare team thinks "virtual servers" run an operating system and the F5 team thinks "virtual servers" represent pathways through their network, I like to say "F5 virtual server" or "VMWare virtual server."

Global Traffic Manager -- GTM (routes requests to the appropriate LTM)
- Wide IPs represent services. A URL is associated with the Wide IP so that users can route through here. Wide IPs can have iRules.
- Pools are configured under Wide IPs.
- Members are assigned within the pools. We create a region1 and a region2 member. These members point to the IP addresses and ports of LTM virtual servers. Normally (but not always) they are given names that match the LTM virtual servers.

Local Traffic Manager -- LTM (routes requests to the appropriate application servers)
- Virtual servers represent services. They have IP addresses and they listen on a port. They can have iRules. When multiple host names are required for the same service, those host names can all alias to the IP of the virtual server (e.g. http://portlets and http://portlets2).
- Pools are configured under virtual servers. One pool can be used by multiple virtual servers, as we do in an environment with the imageserver pool, since we need both HTTP and SSL access to those resources. The customer usually assigns monitors to these, and the monitor applies to every member in the pool.
- Members are assigned within the pools. They are represented by the IP address of the server hosting the service and the port of that service, though the port doesn't have to be the same one used by the virtual server. Customers don't usually assign monitors to these, though it could be done.
- Nodes: we don't talk about these much. They are the IP addresses of the servers that are later combined with ports to become members.

- Wide IP:
- URL:
- Pool of Wide IP: app-portlet
- Members of Pool: port 80, port 80. Member names are app-portlet-reg2-80 and app-portlet-reg3-80

- Virtual Server: Name app-portlet-reg3-80 with IP address and port 80
- Pool of Virtual Server: app-portlet-reg3-80 with monitor
- Members of Pool: and
- Nodes of Members: and
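The node/pool/virtual-server hierarchy above maps onto an F5 configuration roughly like the following sketch. This is tmsh-style notation, the names follow the example above, the addresses are made up, and exact syntax varies by BIG-IP version:

```
ltm node 10.1.2.21 { }

ltm pool app-portlet-reg3-80 {
    members { 10.1.2.21:80 { } }
    monitor http
}

ltm virtual app-portlet-reg3-80 {
    destination 10.1.2.100:80
    pool app-portlet-reg3-80
}
```

Reading bottom-up: the virtual server listens on its own IP and port, sends traffic to a pool, and the pool's members are node-address:port pairs.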

Here is a picture of the LTM's network map view. This shows the virtual servers, their pools, and the members of the pools:


Want to understand F5's LTM in depth, everything from the objects above to session awareness, monitor configuration, iRules, and so forth? Then I recommend you take "BIG-IP Local Traffic Manager (LTM) Essentials," the free, self-paced, 14-hour training course F5 offers online. You can follow the training modules, then log into a cloud-based LTM to do configuration exercises. Even if you're not the person managing the device for your customer, you'll know enough to ask for the right things. You might even learn about features your F5 team isn't aware of, and you'll then be able to push them to a new level of ROI from this product.


Dealing with frenemies and port conflicts

Subtitle: How to identify which process is running on a port

Hi Folks:

I just found a surprise about a friend of mine: Gizmo5. First, some background on how I met Gizmo5.

Do you know about Google Voice's offering? Google gives you a free phone number, then among other things, it lets you forward that number elsewhere. Where to forward it? One thing I wanted to do after moving to a new city (Helloooooo Austin!) was get a landline since my wife didn't get great cell reception at our new place. "A landline it is," I said, but continued to her mild displeasure, "but I want to try getting this set up without using AT&T." I searched for a good voice-over-IP phone service. I wanted something like Vonage, but I didn't want fees.

Gizmo5 is one of many free VOIP services, or SIP providers. Another I use is sipgate. Oh yeah, and there's Skype, but Skype charges a monthly fee for a phone number right? Something like that. Money was involved, so I didn't go there. Plus, I wanted to have more of a DIY solution. So the idea of these VOIP providers is they give a phone number that rings to an Internet-connected client. The easiest client is the laptop-based softphone that every SIP provider has. Here's the one from Gizmo:


But the Internet-connected client becomes much more interesting when the client is a simple, old-fashioned, landline style phone. This is what Vonage does.

So I bought an analog telephone adapter (ATA) from Grandstream for $45, and after a bit of configuration, I was able to plug my old landline phone into the ATA, then plug the ATA into my router, then have the ATA register itself with Gizmo5's servers to say, "when a call comes in to Bill's Gizmo5 account, let me know because I'm his phone." Then I had Google Voice forward my Google Voice number to that Gizmo5 number, and I'm in business. How cool is this? So cool that Google bought Gizmo5 and ended new registrations while they work on their integration plan. Don't worry though. You can set this up with a sipgate account too.

Anyway, I still have that Gizmo5 softclient running on my laptop from time to time. And today I fired up my WCI Automation service and saw messages like these in my PTSpy:

Automation Server cannot be initialized.
com.plumtree.openfoundation.util.XPException: Address already in use: JVM_Bind

InitForScheduler(): Unable to start communicator on port 7777 Address already in use: JVM_Bind

Hey, what's that about? I ran this command to see what was running on port 7777:

Netstat -a -n -o | GREP 7777

And the report came back:

TCP               LISTENING       3184

So what is running behind process 3184? I checked my task manager and found it's my friend Gizmo5 now acting as my enemy:


Since I don't know how to change the port of Gizmo5, I hop into my serverconfig.xml and change the automation server's port, restart, and I'm back in business with a fully functioning WCI system. Gizmo5 is no longer an enemy but a friend.
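If you hit this kind of conflict often, the port-to-PID lookup can be scripted. Here's a sketch using a sample line in the shape netstat prints; the addresses and PID are made up:

```shell
# A "netstat -a -n -o" line carries the owning PID in its last column.
line='TCP    0.0.0.0:7777    0.0.0.0:0    LISTENING    3184'

pid=${line##* }   # strip everything up to the last space, leaving the PID
echo "$pid"       # 3184

# On Windows, you could then feed that PID to:  tasklist /FI "PID eq 3184"
# to get the process name without opening Task Manager.
```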

PS: The business model behind free SIP providers is they charge for outgoing telephone calls. Gizmo5 is a penny per minute. Sipgate is two cents per minute. But incoming calls are free, so? Initiate those long calls from Google Voice. Google will ring your SIP provider as an incoming call, then Google rings the party you wish to speak with, and it's free.

Someone asked this question today:

What does a web proxy server placed in front of the Portal give you, in terms of security (or anything else), when there is already an SSL Accelerator (F5 BigIP) managing the portal? The end user would still access the Portal on port 80.  Either way.  What does the extra server buy you?

In hopes a larger audience might find my answer useful, here you go. First though, I'll try the "picture is worth a thousand words" approach, using a slide from a presentation I did a couple years ago:


Now my take:

Consider this case: You have users on the public internet, and you don't want any of your app servers to be in the DMZ. So you put a proxy in the DMZ, and it can reach back through the firewall to the internal Big IP that can route traffic to the many app servers.

Why not put the Big IP itself in the DMZ and have it route from there? One reason is that it handles traffic for many more ports than you want open on the firewall (e.g. for search, directory, dr). But more importantly, Big IP needs to be able to monitor the members of its pools. So there's lots of chatter between it and the servers.

So there you've got the security angle.

Also, proxies sometimes offer additional features such as authentication. Even if you only have internal users, you may want them to authenticate at your company proxy.

There's also improved performance when you can keep the portal in the same VLAN as the remote servers it uses to build pages. A single portal page load can generate dozens of DB queries and http requests to the remote tier. A proxy lets you keep users in the DMZ while keeping the portal near those resources.

WCI Settings Files: rules for construction

The world is full of rules. I was amused at a local Austin grocery store to find rules against something that seems pretty obvious: food trays are not sleds. Other rules, though, can be harder to figure out. In case you need to know some of these less obvious rules:

I'm working on an effort to restructure WCI settings files, and a piece of this required understanding the rules for putting together a valid settings file. I hope to later explain the whole project, but until then, here's a subset of what I learned.

The Loose
WCI applications read in everything in the %WCI_HOME%\settings directory on startup. A default system would have these in c:\oracle\wci or some such location. That everything is read means WCI neither cares what your file names are nor what subfolders they may be in. For example, you can move .\settings\configuration.xml to .\settings\do-not-use\disabled.xml, and it will still work just fine. The system treats all information across all files as a single settings definition.

You can also break apart the out-of-the-box XML files into new smaller files, or you can rearrange their content entirely. This explains how it is that systems run WCI equally well for fresh installs versus upgraded installs even though each has differently structured XML files (for example, fresh installs store settings in configuration.xml that upgraded installs keep only in portal\portalconfig.xml and common\serverconfig.xml).

You can add settings in the XML files that are not required and not used by the system. For example, you can have a context or a component defined but never used.

The Strict
Within the config files, however, you'll find tightly linked context, component, and client sections. Some rules are:
  1. A context cannot be defined more than once.
  2. A component name cannot be used more than once.
  3. A component cannot have a subscribed client that is not a defined context.
  4. A client cannot subscribe to two different contexts of the same component type.
An Example
Now is a great time for an example. The following file sits on my system as %WCI_HOME%\settings\example.xml. When the system starts, this file is read into the settings definition, though nothing in it will be used by my applications. The system runs just fine, and it will continue to do so unless I uncomment any of the sections of the config file that are designed to break the four strict rules I previously listed.

Download the file so you can load it in a readable XML parser, load it on your system, or tweak it. You can also try reading it in less readable format below.


<?xml version="1.0" encoding="UTF-8"?>
<OpenConfig xmlns="" xmlns:xsi="">
    <context name="example-context"/>
    <!-- ERROR 1: uncomment the below context to create "context with this name already exists" error -->
    <!--
    <context name="example-context"/>
    -->
    <!-- include the below context to illustrate that listed contexts need not be used -->
    <context name="example-context-unused"/>
    <component name="example-component" type="">
        <setting name="sometype:something">
            <value xsi:type="xsd:boolean">true</value>
        </setting>
        <client name="example-context"/>
        <!-- ERROR 2: uncomment the below client to create "context could not be opened" error -->
        <!--
        <client name="undeclared-context-breaks-system"/>
        -->
    </component>
    <!-- include the below component to illustrate that components need not have clients -->
    <component name="example-component-no-clients" type="">
        <setting name="sometype:something">
            <value xsi:type="xsd:boolean">true</value>
        </setting>
    </component>
    <!-- ERROR 3: uncomment the below component to create "component with this name already exists" error -->
    <!--
    <component name="example-component-no-clients" type="">
        <setting name="sometype:something">
            <value xsi:type="xsd:boolean">true</value>
        </setting>
    </component>
    -->
    <!-- ERROR 4: uncomment the below component to create "context already subscribes to component of type" error -->
    <!--
    <component name="example-component-duplicate-type" type="">
        <setting name="sometype:something">
            <value xsi:type="xsd:boolean">true</value>
        </setting>
        <client name="example-context"/>
    </component>
    -->
</OpenConfig>

ALUI/WCI SSO Login Sequence and Log Files

You can't trust your web server logs to tell you how many pages your portal users view. When logging in, especially under SSO, the login sequence generates several "GET /portal/ " lines. I dug into this today, and the results may be helpful as you look to infer portal usage from log files.

Yesterday I turned to IIS logs to determine some usage patterns in the portals I work with, where users can enter through two different SSO systems. I started by counting how many times SSOLogin.aspx occurred for each SSO system (hosted on different servers). The results appeared material, so today I wondered whether the load from the two systems differs. Do the users of one SSO system have a more engaged portal session?

First I counted simply "GET /portal/" in the log files, and I thought one set of users had far more pages per session than the other. However, I then realized that gateway images were also returned by my search pattern, so I added a trailing space: "GET /portal/ ". This made the traffic look much more similar.
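To see the difference the trailing space makes, here's a tiny self-contained sketch; the log lines are fabricated stand-ins for real IIS entries:

```shell
# Fabricated IIS-style log lines: two page requests and one gateway image.
cat > /tmp/iis-sample.log <<'EOF'
GET /portal/ open=space&name=Login 200
GET /portal/gateway/PTARGS_0_1234/banner.gif 200
GET /portal/ open=space&name=MyPage 200
EOF

grep -c "GET /portal/" /tmp/iis-sample.log    # counts all 3, gateway images included
grep -c "GET /portal/ " /tmp/iis-sample.log   # counts only the 2 page requests
```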

But I still didn't know how many actual pages the user sees. What happens in the login sequence?

What I found was:

* It is hard to identify actual pages per visit because the IIS log sometimes shows 3 and sometimes 4 requests per login.
* A user's login generates three lines in the IIS log with "GET /<virtualdirectory>/ "  when the user enters the portal through http(s)://<portalhost>/
* A user's login generates four lines in the IIS log with "GET /<virtualdirectory>/ "  when the user enters the portal through http(s)://<portalhost>/<virtualdirectory>/

The login sequence as found in IIS logs looks similar to this:

1. The unidentified user enters without specifying the <virtualdirectory>/, then redirects to the SSO login

2. The SSO-authenticated user is redirected to the portal from the WSSO login

3. The SSO-authenticated user is directed to the portal's SSOLogin sequence to process the SSO token and become portal-authenticated

4. The portal-authenticated user runs a login sequence to determine the proper home page behavior
/portal/ open=space&name=Login&dljr= 

5. The user lands on the proper home page

I hope that's helpful.
Here's another workaround.

Download this post, the batch file it refers to, and the wget utility from

This describes a way to get results similar to the ALUI portal's Cached Portlet Content feature. This is useful for users of Oracle's WebCenter Interaction 10gR3, a release that has a bug (No. 8689121) that causes this feature to otherwise be unavailable. As the bug database describes it, "WHEN "RUNNING PORTLETS AS JOBS", THE JOB WILL FAIL."

Cached Portlet Content Feature
You can read about the Cached Portlet Content feature at As that page describes, "You might occasionally want to run a job to cache portlet content (for example, if the portlet takes a couple minutes to render). When the job runs, it creates a snapshot of the portlet content (in the form of a static HTML file) that can be displayed on a web site. The file is stored in the shared files directory (for example, C:\bea\ALUI\ptportal\6.5) in \StagedContent\Portlets\<portletID>\Main.html. You can then create another portlet that simply displays the static HTML."

The alternate way to get cached portlet content is to create an external operation that will call the URL of the desired content and then will save it to the automation server's file system. This uses wget.exe, a program that is standard on UNIX environments and that is distributed with this workaround for Windows. The port I use is from

1. Put wget.exe into the %WCI_HOME%\ptportal\10.3.0\scripts directory of your automation server (e.g. D:\bea\alui\ptportal\10.3.0\scripts). This application allows you to access web pages from the command line and then to save them to the file system.
2. Put the wget-extop.bat file into the %WCI_HOME%\ptportal\10.3.0\scripts directory of your automation server.
3. Test that it works by opening a command prompt on your automation server to %WCI_HOME%\ptportal\10.3.0\scripts, then running a command like this one:

"wget-extop.bat" target-homepage

When that command finishes, you should see a success message similar to the following:

20:46:28 (104.98 KB/s) - `..\StagedContent\portlets\target-homepage\Main.html' saved [80621]

4. Make sure logging works properly. You should find a file in %WCI_HOME%\ptportal\10.3.0\scripts named wget-extop.log. Open that file and see that it recorded your recent action.

5. Make sure the action downloaded the webpage. You should find it in a location like %WCI_HOME%\ptportal\10.3.0\StagedContent\portlets\target-homepage\Main.html.

6. Open the portal and create an external operation object. On the main settings page, enter an Operating System Command like this:

"wget-extop.bat" target-homepage

The command has three parts. First it names the batch file you'll use. Second, it gives the URL to download. Third, it gives the identifier for this download, which becomes the directory in which the downloaded content is stored. Be careful to use only characters in the identifier that work as directory names. An identifier like "" will not work because you cannot have slashes in a directory name. Your command might look like this:

"wget-extop.bat" about-our-company

7. In the portal, create a job that will run your external operation. Schedule it to run at the appropriate interval.

The contents of wget-extop.bat should be as follows:


@echo off
rem wget-extop.bat: fetch a URL (%1) and save it under an identifier (%2)
set arg1=%1
set arg2=%2

rem create the target directory; harmless if it already exists
md ..\StagedContent\portlets\%arg2% 2>nul

rem log the action, then overwrite the previous snapshot
echo %date% - %time% --- wget %arg1% -O ..\StagedContent\portlets\%arg2%\Main.html >> wget-extop.log
wget %arg1% -O ..\StagedContent\portlets\%arg2%\Main.html


This workaround does not offer all the features that the Cached Portlet Content feature normally has. The main reason for limitations is that this request uses wget rather than the portal engine to request content. The request therefore has no access to portlet preferences and so forth. While this workaround is sufficient in some cases, it does not claim to work in all.


Bill Benac
October 2009

In software development, we can sometimes have maddening debates about whether something is a feature or a bug. This reminds me of an old Phish song: Windora Bug.

"Is that a wind? Or a bug? It's a Windora bug." In other words, it's both. While troubleshooting your system, you might want to listen to the mp3.

In WCI 10gR3, we find the collision of two reasonable features. I think together they make a bug. Or at least, a badly designed feature. So let's start with the old feature:

Sometimes agents outside the portal need to authenticate in. Users count as agents, and so do remote portlets. To allow the agent to log in without providing a password each time, the portal can send a login token that the agent can use for future portal connections. Two old examples of this are [1] when a person uses the "Remember my Password" feature of the portal login screen (usually valid for many days) and [2] when a remote portlet web service sends a login token to the remote service (usually valid for five minutes). The login token held on the remote tier by the agent can be decrypted by the server using its key. This works fine in both the old use cases I provided because the remote tier is handed this value by the portal server.

For whatever reason, you may decide every once in a while that there is a security issue related to saved passwords. The portal had a great feature in the old days to let you update the login token's root key and thereby invalidate these old login tokens forcing users to reauthenticate. The tool for the reset is in the administrative portal under the Portal Settings utility, and it looks something like this:

When you click that "Update" button, it connects to the portal database and generates a new login token root key, stored in PTSERVERCONFIG with settingid 65.

The trouble comes in with the new feature. In 10gR3, the portal introduces new applications that encrypt passwords based on the login token root key, but this is done at configuration time in the remote application's Configuration Manager. The problem is that those applications are built apparently assuming that the login token root key will never change. The configuration manager requires that you provide the login token root key to it directly. Applications that do this include but are not limited to the Common Notification Service and Analytics. For example:

The upshot of all this is that if you choose to click that button in the Portal Settings utility, then you get a new login token root key that no longer matches the one relied on by your remote applications.

If this part of the portal were reconceived, then perhaps the database would have one login token root seed used for agents with a transient token such as those given to users and through remote web service calls that are used to let the agent come back. Those keys basically say, "you've been here before, and you can come back." Then the database might have a second root seed for applications that need permanent access to the portal. In that case, the update feature would be fine, and it would only apply to the key for transient agents.

Oh well. We have to live with it. So to avoid administrators accidentally breaking remote applications, I suggest you update the portal UI to explain the full effect of this particular feature (if you don't want to go through the headache of an involved UI modification to entirely remove it). I did this and now have the following:

I got there by modifying this file on the admin servers:
Within it I changed strings 2134, 2135, 2136, and 2964. My file has no other modifications in it from the vanilla 10.3.0 version. You can download it here.


About this Archive

This page is an archive of recent entries in the BEA/Oracle category.

Java is the next category.

Find recent content on the main index or look in the archives to find all content.