Results tagged “ALUI” from Bill Benac

Detailed Diagram of ALUI Publisher 6.5 Components

Publisher is an old product, but it still has legs in some organizations. I recently helped a customer set up Publisher to load balance the portion of the app used by browsing users (the readers of published content). The discussions about how to set this up were difficult until I diagrammed the components clearly.

If you ever need to work with Publisher, and especially if you want to increase the reliability of the reader component, then I hope this diagram will be helpful to you.

Enjoy!

publisher-drawing.jpg

ALUI/WCI SSO Login Sequence and Log Files

sequence.gif

You can't trust your web server logs to tell you how many pages your portal users view. When logging in, especially under SSO, the login sequence generates several "GET /portal/server.pt " lines. I dug into this today, and the results may be helpful as you look to infer portal usage from log files.

Yesterday I turned to IIS logs to determine some usage patterns in the portals I work with where users can enter through two different SSO systems. I started my search by looking at how many times SSOLogin.aspx occurred for each SSO system (hosted on different servers). When the results appeared material, I wondered today whether the load for the two systems differs. Do the users of one SSO system have a more engaged portal session?

First I counted simply "GET /portal/server.pt" in the log files, and I thought one set of users had far more pages per session than did the other. However, I then realized that gateway images were returned by my search pattern, so I added a space: "GET /portal/server.pt ". This made the traffic look much more similar.
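The counting itself is simple with grep (which I use elsewhere on this blog; the log file name here is just an example):

grep -c "GET /portal/server.pt" ex090101.log
grep -c "GET /portal/server.pt " ex090101.log

The first count includes gatewayed images and other sub-requests; the second, with the trailing space, counts only page requests.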

But I still didn't know how many actual pages the user sees. What happens in the login sequence?

What I found was:

* It is hard to identify actual pages per visit because the IIS log sometimes shows 3 and sometimes 4 requests per login.
* A user's login generates three lines in the IIS log with "GET /<virtualdirectory>/server.pt/ "  when the user enters the portal through http(s)://<portalhost>/
* A user's login generates four lines in the IIS log with "GET /<virtualdirectory>/server.pt/ "  when the user enters the portal through http(s)://<portalhost>/<virtualdirectory>/server.pt

The login sequence as found in IIS logs looks similar to this:

1. The unidentified user enters without specifying the <virtualdirectory>/server.pt and is redirected to the SSO login


2. The SSO-authenticated user is redirected to the portal from the WSSO login
/portal/server.pt 

3. The SSO-authenticated user is directed to the portal's SSOLogin sequence to process the SSO token and become portal-authenticated
/portal/sso/SSOLogin.aspx 

4. The portal-authenticated user runs a login sequence to determine the proper home page behavior
/portal/server.pt open=space&name=Login&dljr= 

5. The user lands on the proper home page
/portal/server.pt/community/superstuff/204 

I hope that's helpful.

How to Better Revive a Failed Search Node (and Why)

I've been working with the same technology stack for an amazingly long nine years. This has given me much opportunity to work with the same types of issues over and over, and in doing so, I've refined my approach quite a bit. Thus, here's a post that is essentially an improvement on a two-year-old post, How to Revive a Failed Search Node. I hope this post will offer both a better description of the problem and a better solution to it.

The WebCenter Interaction search product has two features that can interfere with each other. First, on the search cluster, you can schedule checkpoints to essentially wrap up and archive the search index, giving you the ability to restore it later. Second, at startup each search node looks to the request directories on the search cluster to synchronize a copy of the latest index.

Customers running both checkpoints and multiple nodes periodically encounter trouble because the checkpoint process removes old search cluster request directories that the nodes want to access. So if one of your search nodes goes down while the other node keeps working and checkpoints continue to run on a daily schedule, then by the time you realize the node has failed a few days later, it won't start. It fails when it tries to access the numbered directory that existed the last time it ran properly. In such a case, the errors in your %WCI_HOME%\ptsearchserver\[version]\[node]\logs may look like this:

Cannot find queue segment for last committed index request: \\servername\SearchCluster\requests\1_1555_256\indexQueue.segment

Indeed, if you look at the path that was shown in the error, you'll find that the numbered folder no longer exists. Perhaps the latest folder will be SearchCluster\requests\1_1574_256.

The fix is to reset the search node so that it no longer expects the specific folder on which it had been fixated. I wrote about a way to do this with several manual steps in my prior post. This time, however, after encountering the problem many more times, I'm sharing a batch file that I place on Windows search servers to automate the reset process (and this works on both ALUI 6.1 and WCI 10gR3):

set searchservice1=myserver0201
set search_home=c:\oracle\wci\ptsearchserver\10.3.0
@rem
@rem configure the above two variables for your node and install path
@rem
net stop %searchservice1%
c:
@rem wipe the node's fixated index and recreate an empty one marked ready
rmdir /s /q %search_home%\%searchservice1%\index\
mkdir %search_home%\%searchservice1%\index\1
echo 1 > %search_home%\%searchservice1%\index\ready
cd %search_home%\%searchservice1%\index\1
@rem rebuild empty lexicon and spell archives so the node can start fresh
..\..\..\bin\native\emptyarchive lexicon archive
..\..\..\bin\native\emptyarchive spell spell
net start %searchservice1%

search-panel.jpg

To find the name of the search service that goes in the first variable, open your Windows services panel, find your search node, right-click to open its properties page, and find the "service name" value. This is not the same as the display name. The service name by default is [machine][node] as far as I can tell. So on my box (bbenac02) as the first node, my service name is bbenac0201. This is different from the display name, which defaults to something like "BEA ALI Search bbenac0201."
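If you'd rather skip the services panel, sc can translate a display name into a service name. A quick sketch using my box's display name:

sc getkeyname "BEA ALI Search bbenac0201"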

Enjoy!

Tunings for the LDAP IDS Sync

gone_running.jpg

Are you worried about your LDAP IDS sync's running time? If your system is relatively small, you may not think much about it. You run the sync each night, or maybe even several times during the day, and life is good. However, on systems that push beyond several hundred thousand users, the performance of this product may become important. An obscure setting can cut the time in half.

This week I reinstalled the IDS, and after running it with default settings, my sync ran in about eight hours. After some tunings though, the job usually finishes in three and a half hours.

Cached Objects
The most important tuning is done in %WCI_HOME%\ptldapaws\2.2\settings\config\ldap\properties.xml: I increased the MAX_CACHED_USER_OBJECTS setting from a meager 20000 to 1000000.

Memory
With this increased cache setting, you may also find you need to increase memory allocation. Do this in
%WCI_HOME%\ptldapaws\2.2\settings\config\wrapper.conf:

# Initial Java Heap Size (in MB)
wrapper.java.initmemory=1024

# Maximum Java Heap Size (in MB)
wrapper.java.maxmemory=1024

Session Timeout
Another tuning that may be necessary when you run larger synchronization batches is to increase the session timeout period within the ldapws.war file's web.xml. The default session-timeout is 60 minutes, but we run at 600.
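This is the standard servlet session-config element; with our value it looks like this:

<session-config>
    <session-timeout>600</session-timeout>
</session-config>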

Enjoy!

The title tells me this is going to be boring to some readers, but I guarantee those who arrive from Google searches will be glad for it--especially since an early discussion of this that used to be on forums.bea.com is now gone.

A funny problem arises when you take certain portlets written in .NET and load them into ALUI 6.5 or Oracle 10gR3 portals. The portlets at issue use the EDK/IDK's PRC library. Even though these portlets work fine when loaded into ALUI 6.1 portals, they suddenly fail with this deceptive message:

The underlying connection was closed: The server committed an HTTP protocol violation. at com.plumtree.remote.prc.soap.QueryInterfaceProcedures.GetVersionInfo() in e:\latestbuild\Release\devkit\5.4.x\prc\src\dotnet\portalprocedures\QueryInterfaceProcedures.cs:line 35 at com.plumtree.remote.prc.xp.XPRemoteSession.getAPIVersion() in e:\latestbuild\Release\devkit\5.4.x\prc\src\xpj2c\portalclient\com\plumtree\remote\prc\xp\XPRemoteSession.cs:line 352 at Plumtree.Remote.PRC.RemoteSessionWrapper.GetAPIVersion() in e:\latestbuild\Release\devkit\5.4.x\prc\src\dotnet\portalclient\RemoteSessionWrapper.cs:line 144 at apicheck.WebForm1.PageLoad(Object sender, EventArgs e) in d:\www\api-check\portlet.aspx.cs:line 60


The real problem though is that the application's web.config file needs an extra settings block dropped in just after the system.web block:

<system.net>
  <settings>
    <servicePointManager expect100Continue="false" />
  </settings>
</system.net>
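To be clear about placement, the new block is a sibling of system.web under the configuration root, not nested inside it:

<configuration>
  <system.web>
    <!-- existing settings -->
  </system.web>
  <system.net>
    <settings>
      <servicePointManager expect100Continue="false" />
    </settings>
  </system.net>
</configuration>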


You'll have to do this with your custom portlets, but it may also be necessary with portlets that come from Oracle/BEA. I've encountered it with the SharePoint Console, for example.

Enjoy (no hugs though),

Bill

Spreadsheet to Generate URLMapping Entries

The URLMapping section of portalconfig.xml isn't the most elegant part of the ALUI portal configuration. You are required, for each URL you intend to support, to repeat a block of settings but with incremented index values. I'm working with a customer using dozens of URLMappings, and we realized there had to be a better way than updating these individually. Enter Excel.

I created two spreadsheets, and the one you use depends on your environment and preferences. The simpler urlmapping-generator.xls creates a single mapping for each URL you want to use, and this is what you would expect is required. However, as I wrote last year, there's a bug in how URLMappings are handled when a proxy or load balancer is involved, and the way to fix it is with an extra URLMapping. I have an advanced spreadsheet, urlmapping-generator-proxybugfix.xls, for this situation; it creates both the first mapping you would expect and the second mapping to handle the bug.

I hope this is helpful.

Fully Disable Search in ALUI Collab 4.5

I'm working with a Collab deployment where we want to disable the search feature. This involves two things: disabling index requests from Collab to the Search Server and removing the search form from Collab's web UI.

Disabling Index Requests from Collab to Search

The easy part is disabling index requests from Collab to the Search Server. Just open %ALUI_HOME%\ptcollab\4.5\settings\config\config.xml and change the value for "search enabled" from "yes" to "no."

Removing Search from Project Explorer UI

For whatever reason, the Project Explorer view of Collab shows a search form even when the app is configured not to have its content indexed. This is shown below:

collab.projexp.search.before.jpg

The only way I know of to remove the form is to modify the JSP files within collab.war. The process for modifying collab.war is:

  1. Open %ALUI_HOME%\ptcollab\4.5\webapp
  2. Back up collab.war to collab.war.orig
  3. Create a new subdirectory called build
  4. Extract collab.war into %ALUI_HOME%\ptcollab\4.5\webapp\build
  5. Make the appropriate edits (described below)
  6. Package the contents of %ALUI_HOME%\ptcollab\4.5\webapp\build into a new collab.war
  7. Stop Collab
  8. Replace the old collab.war with the new one
  9. Start Collab
To remove the search UI, as in this case, make the following edits:

  1. Open %ALUI_HOME%\ptcollab\4.5\webapp\build\project\appView\projectToolbar.jsp
  2. Comment out these lines:

        <jsc:toolbarHtmlBlock align="right"><nobr>
            <fmt:message key="project.search"/>
            <input type="text" id="searchText" class="formInputBoxText" align="center"/>
            <select id="searchScope" class="objectText" width="0%">
                <option value="all"><fmt:message key="project.search.scope.all.folders"/></option>
                <option value="current"><fmt:message key="project.search.scope.current.folder"/></option>
            </select>&nbsp;
        </jsc:toolbarHtmlBlock>
        <jsc:toolbarHtmlBlock align="right"><nobr>
            <collab:img src="search.gif" border="0" align="center" onclick="searchHandler()"/></nobr>
        </jsc:toolbarHtmlBlock>

  3. Open %ALUI_HOME%\ptcollab\4.5\webapp\build\layout\templates\appViewSearchBar.jsp
  4. Comment out these lines:

        <input type="hidden" name="projID" value="${fn:escapeXml(baseAppViewBean.currentProject.ID)}">
        <input type="hidden" name="isAppView" value ="true">
        <input type="hidden" name="projRestr" value ="1">
        <input type="hidden" name="collabType" value ="<%=com.plumtree.collaboration.cssearch.CollabIndexManager.OBJECT_TYPE_ALL%>">
            <td class="banText" align="${fn:escapeXml(tdAlign)}" width="50%" nowrap style="padding-right:5">
                <b><fmt:message key="project.search.label"/></b>
                <input size="32" name="searchString" value="" class="formInputBoxText">
                <collab:simpleUI>
                    <input type="submit" name="go" value="<fmt:message key="key.search"/>" class="formBtnText">
                </collab:simpleUI>
                <collab:notSimpleUI>
                <a href="#" onClick="document.searchForm.submit();"><collab:img src="search.gif" border="0" align="absmiddle" altKey="key.search"/></a>
                <a href="${fn:escapeXml(advancedSearchURL)}"><collab:img src="advancedSearch.gif" border="0" align="absmiddle" altKey="search.advanced"/></a>&nbsp;
                <input type="button" name="closeWindow" value="<bean:message key="button.close"/>" onClick="window.close()" class="formEditorBtnText">
                </collab:notSimpleUI>
            </td>

            <collab:simpleUI>
                <td class="banText" align="right">
                    <tiles:insert attribute="help"/>
                </td>
            </collab:simpleUI>


  5. Add the following lines in the place of what was just commented out:

    <td class="banText" align="${fn:escapeXml(tdAlign)}" width="50%" nowrap style="padding-right:5">
    <input type="button" name="closeWindow" value="<bean:message key="button.close"/>" onClick="window.close()" class="formEditorBtnText">
    </td>


  6. Save the files


The edits remove the undesired UI component from each project's individual view and from the Project Explorer view, the latter shown here:

collab.projexp.search.after.jpg

By the way, these steps are a little tedious if you need to tweak your customizations as you try to get it right. After setting up the directories and unzipping the original collab.war, I wrap the remaining steps in a script that lets me quickly run my iterations. That script's contents:

net stop "BEA ALI Collaboration"
c:
cd c:\bea\alui\ptcollab\4.5\webapp\build
zip -r collab.war *
mv -f collab.war ..\.
net start "BEA ALI Collaboration"


Note that I'm using command line tools "mv" and "zip" that come from the Unxutils project, a collection of Windows ports of Unix utilities.

That's it. Enjoy!


Recently I had a conversation with someone about the features available in ALUI for portlet developers. We spoke mostly about what the IDK offers, but there's more too. The IDK is largely about data access and content transformation on the back end, either at the remote server level or in the portal's "gateway" processing space. But much can be done on the browser side too using adaptive portlets. I wrote a guide for this a long while ago, and it's still relevant and helpful. So I'll post it here.

The guide starts with a Word document that gives an overview and screenshots of adaptive portlets. Then it gives installation instructions for the samples that are provided with the guide in its zip file. I'll put the first few pages of the Word document in this post so you can know whether you want to download the entire guide with its sample code.

Getting Started with Adaptive Portlets

BEA uses the term "Adaptive Portlets" to refer to portlets using AJAX--technologies generally based in JavaScript and XML that allow richer application development.  A simple introduction to this technology is provided through the "Intro to Adaptive Portlets" community with its associated portlets. This document describes that community and how to install it on your own system.

The "Intro to Adaptive Portlets" community consists of several pages. The main page describes the adaptive portlet technology.  Subsequent pages of the community illustrate different design patterns individually, and then the last page shows all technologies together in a cacophonous celebration. Screenshots of the pages are below.

  • Main Page
  • Auto Refresh
  • Inline Navigator
  • Inline Post
  • Master/Detail
  • Broadcast Listener
  • All in One!


How to Remove Individual Components of Collab

Let's get to know how to easily do what the Collab installer doesn't let you do easily: manage its individual components.

At my current client, we're upgrading ALUI Collab 4.2 to 4.5 as part of our upgrade to ALUI 6.5. One feature of the new ALUI suite is that it offers a "common notification service" that replaces the collab-specific notification service that had been part of 4.2. Accordingly, we don't need the old Collab notification service. The official upgrade documentation says "If you have an existing Notification Service from your previous installation of Collaboration, disable or uninstall it." But how do you do that?

If you run the Collab 4.2 or 4.5 uninstaller on a machine with more than one Collab component, it will remove every component. That doesn't work well since you're trying to upgrade the Collab piece and remove the notification piece. Similarly, you may realize that you installed Collab 4.5's Search Service on the Collab box when you really want it on the Search box, so how do you remove just the Search Service?

In either case, you can use the batch file that is part of the undesired component to remove the service, then delete the files from the file system. In the case of the 4.2 notification service, use this batch file: %ALUI_HOME%\ptnotification\4.2\bin\ptnotificationserverd.bat, and in the case of the Search Service, use this batch file: %ALUI_HOME%\searchservice\1.1\bin\searchservice.bat. Pass in the parameter "remove" and you're done.
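For example, removing the old notification service looks like this from a command prompt on the Collab box:

cd /d %ALUI_HOME%\ptnotification\4.2\bin
ptnotificationserverd.bat remove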

Enjoy!

Firefox Plugin Shows Host Portal Info

ALUIportalhost.jpg

Are you familiar with the routine of opening the HTML source of an ALUI portal page, scrolling to the bottom, and checking for the name of the host server? This is something I've done countless times in load balanced environments when trying to test or debug a server.

I decided to make a Firefox plugin that will extract that portal host information then display it at the bottom of the browser so that I can immediately see the portal host.

I've had several other people try this plugin, and they found it useful. I hope you'll find it helpful too. To install it, download the plugin, then drag it onto your Firefox browser. It's been tested on FF versions 1.5 through 3.5.

Enjoy!

Updated October 7:

Thanks to Andreas Mersch who improved on my original extension. With his addition, the browser will now display the portal performance information along with the portal host. The new version can be downloaded with the same link as before.

Configuring Proxies in ALUI Core Products

configure_proxy.jpg

Sept 7 2009 Update: There's a much easier way to configure the core products (not the RSS reader). This flash comes thanks to Igor Polyakov. It turns out you can have the Configuration Manager web application display a proxy configuration panel just by toggling a value in an XML file. Just open %WCI_HOME%\descriptors\descriptor.application.analytics-console.2.5.xml, and find this:

<dependency descriptor-type="http://www.plumtree.com/config/component/descriptor-types/openhttp/1.0" visible="false">

Set the visibility to true, restart your BEA AL Configuration Manager, and voilà, you'll be able to edit this through the UI.



To access the public Internet from many enterprise environments, one needs to configure the browser on his or her laptop to go through a proxy server. Sometimes this requirement applies to the servers that run ALUI as well. With a browser, it's a fairly straightforward point-and-clickety-clickety-click to enter the proxy information, but with ALUI products, it's more involved. It seems like I always need to check my old emails to find configuration instructions, so I'll post here to make it easier for me, and hopefully easier for you too.

Of the several core ALUI products, only a few need proxy information. Products like the Search or Document Repository server of course do not need to make requests to resources outside of the ALUI servers. The portal is the most obvious component for doing so. You might have some portlets provided by Google, for example, that users should be able to access. The portal's proxy is configured by updating %ALUI_HOME%\settings\common\serverconfig.xml to use the following settings:



A less obvious component that should be configured for proxy access is the automation server. In some cases, portal administrators and content managers may choose to create a job that runs a portlet web service as its operation. One reason to do this is to generate the HTML that comes from a slow-running dynamic portlet. Another reason could be that the code behind a URL runs some job. The automation server uses the same proxy setting configuration that the portal does in %ALUI_HOME%\settings\common\serverconfig.xml.

Finally, the new Remote Portlet Service's RSS Reader needs a proxy configured in order to get feeds outside the enterprise. The settings are to be put in %ALUI_HOME%\remoteps\1.0\settings\config\wrapper.conf. In myco's case, the proper settings were:

wrapper.java.additional.22=-Dhttp.proxyHost=www-proxy.myco.com
wrapper.java.additional.23=-Dhttp.proxyPort=31060
wrapper.java.additional.24=-Dhttp.nonProxyHosts="localhost|*.myco.com"

It is important to follow the example settings in the file correctly. The nonProxyHosts setting needs to be in quotation marks, but the proxyHost and proxyPort should not be.

It is also important to not follow the example settings with regard to the setting number. The file suggests:

#wrapper.java.additional.19=-Dhttp.proxyHost=
#wrapper.java.additional.20=-Dhttp.proxyPort=
#wrapper.java.additional.21=-Dhttp.nonProxyHosts=

However, 19, 20, and 21 are used by previous settings, so the proper wrapper.java.additional numbers must be increased, as shown in the myco example.

Enjoy!

A colleague, Jeff, called today to ask about various performance-related items including ALUI portal caching. Many people know that portlet developers and administrators can coordinate to control caching of portlet content. This lets the company news portlet content, for example, be fetched into memory once an hour and then be shared among all users, while the paycheck content is fetched for each individual user. Less well known, though, is that the system administrator can tune how the portal server handles the cache. How many of those paychecks, for example, should be stored in memory before being replaced?

So Jeff and I chatted a bit about the options and effects of tuning portal server caching today. I was reminded of an analysis I did at one customer that was interested in whether performance (on version 5.0.4) could be improved by using more of its spare portal server memory for caching. We found that it isn't worth the trouble to tune away from the default settings because the system works great out of the box. I believe the analysis applies to current versions as well.

I'm not writing this as a blind fanboy who sees nothing but the brilliance of the ALUI product. Rather, after a careful analysis, I realized the tunings really are fine. You'll probably agree with me when you consider how the portal caching works. Basically, the portal stores content in its cache based on how recently it was used. The more frequently content is used, the more likely it is to regain its place at the top of the list of items to cache. When an item is infrequently used, it will be pushed down the cache list by other items and ultimately get pushed out. But this means that the most important content, the most frequently used, is always at the top of the list. All that content toward the bottom of the list doesn't get used frequently anyway, so who cares if it gets pushed out of cache? And if you triple the cache size, then you just store several times more unimportant content in cache.

If you never knew the portal cache settings could be tuned, then forget you read this post because it doesn't matter. But if you were casting about on the Internet looking for information about this topic, stop while you're ahead! Don't bother fixing what isn't broken. It would be better for you to tinker with your image server's content expiration headers for something that will really have impact.

caching-analysis-results.jpg

Want data? Feel free to read the presentation I did on the results of my analysis. You'll see that when we tripled the cache size, we gained only an insignificant reduction in requests to remote servers.


I've seen a couple of questions lately about ways to connect to the ALUI API Service. There are two ways to use the EDK to connect to the API server. Which way you connect depends on the intent of the specific application.

One way to connect is with a specific username and password. That works well if you need the API server to access information or do actions not necessarily available to the browsing user. Let's take the example where you want to report the number of portlets in the system.

If you want all users to see the total number of portlets, including portlets they do not have access to, then you would specify a privileged user for the API connection so that it could query everything. This would be hardcoded or set in admin preferences.

If you want users to see only the number of portlets to which they have access, then you would make the API connection using the credential of the current user. This uses something called the login token, and it is obtained programmatically.

I'll give example code for both, but let me first say that in either case, a variable should be set in the web.config file. The API call will want to know where the PTHOME directory is, and that's best set in web.config rather than hardcoded. So with either solution, this should be in web.config:

  <appSettings>
    <add key="pthome" value="i:\\apps\\plumtree"/>
  </appSettings>

Then in either case, you'll grab that variable from your code.

So if you want to connect using a specific user (usually one with elevated permissions), then you would do it like this:

  // create a session to portal and connect
  string sAdminName = ConfigurationSettings.AppSettings["admin-name"];
  string sAdminPass = ConfigurationSettings.AppSettings["admin-pass"];
  string sPTHome = ConfigurationSettings.AppSettings["pthome"] + "";
  com.plumtree.openkernel.config.IOKContext context = com.plumtree.openkernel.factory.OKConfigFactory.createInstance( sPTHome + "\\settings", "portal");
  PortalObjectsFactory.Init(context);
  IPTSession session = PortalObjectsFactory.CreateSession();
  try
  {
           session.Connect(sAdminName,sAdminPass,null);
  }
  catch (Exception e)
  {
           // a failed connect here usually means bad credentials or an unreachable portal
  }

The admin-name and admin-pass for a production application should be set with an administrative preference and retrieved through the EDK so that they are not visible in the source file.

And if you want to connect with the individual user's context, then do this:

  // connect a session
  string sPTHome = ConfigurationSettings.AppSettings["pthome"] + "";
  com.plumtree.openkernel.config.IOKContext context = com.plumtree.openkernel.factory.OKConfigFactory.createInstance( sPTHome + "\\settings", "portal");
  PortalObjectsFactory.Init(context);
  IPTSession session = PortalObjectsFactory.CreateSession();
  // we'll use the context of the logged in user
  try
  {
            session.Reconnect(edk.GetRequest().GetLoginToken());
  }
  catch (Exception e)
  {
            // a failed reconnect here usually means a missing or expired login token
  }

I have a sample portlet application attached that uses the user's individual context to connect to the API service and gather some information. Its zip file is here. It has a folder called install-resources with some install instructions.

I hope that helps,

Bill

So you can't live with a URL like http://company.com/portal/server.pt? Luckily, you've got options. This post discusses in detail how you can change that URL with ALUI 6.1 MP1 P1. It more briefly explains how to do it on earlier and later versions too.

First take care of these prerequisites:

  • Install base ALUI 6.1 MP1
  • Install P1
  • Load portal at http://company.com/portal/server.pt. It works!
  • Because of a bug in MP1 related to alternate virtual directories, you need to also install the CF AquaLogicInteraction_6.1.1.325722. This does not apply to earlier or later releases. I'm sorry to say the only way to get that CF is to ask customer support for it. That's how CFs work.

But as we already decided, you don't like that default URL. You want to remove /portal entirely. So set up your machine with Metabase Explorer to easily copy your IIS config from the /portal virtual directory to your root directory. Metabase Explorer is part of the IIS Resource Kit, available here.

Let's use it:

  • Launch Metabase Explorer (Start->IIS Resources->Metabase Explorer->Metabase Explorer).
  • Navigate the tree to the /portal virtual directory. This may be at LM/W3SVC/1/ROOT/portal.
  • Within /portal, right-click on ScriptMaps and copy.
  • Within /ROOT, right-click on ScriptMaps and paste.
  • Within /portal, right-click on Path and copy.
  • Within /ROOT, right-click on Path and paste.

Now you've got IIS configured. Let's move to the portal and get it squared away:

  • Open %ALUI_HOME%\settings\portal\portalconfig.xml and edit it to remove "portal/" from the following settings:
    • VirtualDirectoryPath
    • AdminSiteBaseURL
    • SSOVirtualDirectoryPath
  • Save the file
  • Restart IIS

You should now be able to browse to http://company.com/server.pt. It worked for me!
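If you'd like to verify from the command line as well, any HTTP client will do; assuming curl is available:

curl -I http://company.com/server.pt

A 200 response (or a redirect into the portal's login sequence) tells you the root mapping took.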

In the case of ALUI 6.5, I was able to do this without worrying about a CF. In the case of ALUI 6.1.0.1 and Plumtree 5.0.4, I never tried removing the /portal virtual directory entirely, but I was able to rename that directory without difficulty so I believe renaming the virtual directory to "/" will not pose a problem.

Finally, as a bonus, let's say you don't like the server.pt extension. On IIS, it turns out, you can use anything that ends in .pt. So you might choose to publish your portal's URL as something more apt such as http://company.com/a.pt.

Please let me know how this works for you. These aren't battle tested instructions, and this may not be a "supported" configuration. If you deploy this and have many users hitting the portal on a sustained basis, let me know with a comment. Or if you find my instructions are buggy? Let me know that too.

Enjoy!

Upgrade Presentation: BEA Participate 08

Are you a chronic ALUI upgrader? A newbie? I gave a presentation at BEA Participate discussing best practices to give anyone a more pleasant upgrade experience. Seriously, upgrades don't need to be painful.

You can download my presentation here.

Enjoy.


Validate Your Work During Upgrades

You know those upgrade nights when you're working on a dozen servers and performing 50 steps on each of them? It can be easy to miss something.

I've taken to following a test plan during these upgrade activities to double check that we complete every step on every server.

Most steps end up creating a fingerprint of some sort. For example:

  • Running the ALUI installer changes the timestamp on AquaLogicInteraction_silent.properties to the present
  • Installing a CF changes the timestamp on a DLL or JAR to the date the CF was released
  • Updating the license.bea file changes its timestamp to the present
  • Carefully copying back an i18n file with your customizations changes its timestamp from the default time
  • Changing various settings for foo results in new values for bar being put in place

And most of those fingerprints can be verified from the command line. You can easily check a timestamp using the dir or ls command. And you can use grep to find a section of a config file.

So what I've done of late is create a series of commands that I can paste into the prompt on a computer to check every server in the upgrade to verify an upgrade activity is complete. I put these into a spreadsheet. The first page allows me to enter the list of servers in my upgrade. The second page generates these verification commands for every server. As the system administrators go through each step of the upgrade, I copy columns from my spreadsheet into a command prompt, and I get a quick visual for every server whether the activity is complete. For example:

grep -1 set.LD_LIBRARY_PATH  \\ALUI-AS-01\plumtree\ptws\6.1\settings\config\wrapper.conf
grep -1 set.LD_LIBRARY_PATH  \\ALUI-IM-01\plumtree\ptws\6.1\settings\config\wrapper.conf
grep -1 set.LD_LIBRARY_PATH  \\ALUI-IM-02\plumtree\ptws\6.1\settings\config\wrapper.conf
grep -1 set.LD_LIBRARY_PATH  \\ALUI-JS-01\plumtree\ptws\6.1\settings\config\wrapper.conf
grep -1 set.LD_LIBRARY_PATH  \\ALUI-SS-01\plumtree\ptws\6.1\settings\config\wrapper.conf
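If you'd rather generate those checks without the spreadsheet, a for loop at the command prompt does the same thing (use %%S instead of %S inside a batch file):

for %S in (ALUI-AS-01 ALUI-IM-01 ALUI-IM-02 ALUI-JS-01 ALUI-SS-01) do @grep -1 set.LD_LIBRARY_PATH \\%S\plumtree\ptws\6.1\settings\config\wrapper.conf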

I've done this in several upgrades, and in about half of them I find something has been missed. I can immediately point this out to the server admin, and we quickly get back on track. It's a very cheap test in terms of time, but it can save hours of debugging trouble at the end of the upgrade.

Take a look at my spreadsheet. You'll want to update it according to your needs, but it should get you started down a path of sweet success.

Enjoy!


Simplify Installation of Critical Fixes

Another day, another critical fix. I think the profusion of critical fixes (CFs) from BEA's ALUI support group is a good thing actually because it indicates that they're really working to solve customer problems. Never mind that it also represents that the products have bugs, because all software does.

A CF usually consists of a readme file and one or two DLLs or JARs. The readme describes the issues addressed by the fix and includes instructions on how to install it. The instructions, though, often have several steps, and you can avoid doing this work manually. Consider these typical instructions I recently received:

This Critical-Fix is for AquaLogic Interaction 6.1 MP1 Patch 1. This Critical-Fix must be applied to machines that host the AquaLogic Interaction Portal.

To install the Critical-Fix:

  1. Shutdown the Application Server (IIS Admin Service).
  2. Backup the original version of the following files:
      Filename Location
      portaluiinfrastructure.dll %PT_HOME%\ptportal\6.1\webapp\portal\web\bin
      portaluiinfrastructure.dll %PT_HOME%\ptportal\6.1\bin\assemblies
    Note: BEA Systems recommends that you backup original files in a zip file to preserve path information and then save the zip file to a backup folder under the %PT_HOME% folder.
  3. Unzip the AquaLogicInteraction_6.1.1.325722.zip zip file onto the %PT_HOME% folder. Be sure to select the "Use folder names" option to expand all the files into the proper subfolders under the %PT_HOME% root.
  4. Restart the Application Server (IIS).

I am reluctant to use those steps for several reasons:

  • In a large enterprise environment, it can be tedious to follow those many steps on every portal server in every dev, test, and prod environment. The task grows if you have a few critical fixes that apply to your environment.
  • Often time is of the essence, and anything that can save a minute or two during a system change window is worth doing.
  • Every manual step opens the possibility of variance and error. This is even more of an issue during a late-night maintenance window when people's mental acuity can slide.
  • Often system administrators process requests at their own pace, and I am not there to watch them install a CF. I'm less confident things are installed properly if they have several steps.

I am sure many others would agree with these reservations. So what can we do? Wrap the CF up in a batch file that does all the steps and logs them to a central location. What I do is create a folder on a network share that contains the CF documentation, the CF JARs or DLLs, and a batch file with a name like "install-cf-316478.bat." The batch file looks something like this:

@rem note: no quotes here, since this value gets embedded in other paths below
set path_of_new_file=\\fileshare\alui\AquaLogicInteraction_6.1.1.316478
set cfversion=6.1.1.316478
set logfile=%path_of_new_file%\ran.%cfversion%.%COMPUTERNAME%.log
set aluihome=i:\apps\plumtree
set servicename="w3svc"

@echo on
echo ----------- %date% %time% :: Begin install >> %logfile%
net stop %servicename% >> %logfile%


set thelib=openhttp.dll
echo Backing up previous %thelib% (2 locations)... >> %logfile%
move %aluihome%\ptportal\6.1\bin\assemblies\%thelib% %aluihome%\ptportal\6.1\bin\assemblies\pre.cf_%cfversion%.%thelib%.old   >> %logfile%
move %aluihome%\ptportal\6.1\webapp\portal\web\bin\%thelib% %aluihome%\ptportal\6.1\webapp\portal\web\bin\pre.cf_%cfversion%.%thelib%.old  >> %logfile%
echo Copying in new %thelib% (2 locations)... >> %logfile%
copy %path_of_new_file%\%thelib% %aluihome%\ptportal\6.1\bin\assemblies\%thelib% >> %logfile%
copy %path_of_new_file%\%thelib% %aluihome%\ptportal\6.1\webapp\portal\web\bin\%thelib% >> %logfile%


net start %servicename%  >> %logfile%

@echo off
pause

Then instead of giving the server administrator a long list of instructions, or instead of having error-prone steps on my list of late-night upgrade activities, I just say:

          Open each portal server, and paste the following into the Start->Run window:

\\fileshare\alui\AquaLogicInteraction_6.1.1.316478\install-cf-316478.bat

The script stops the service, backs up the necessary files, copies in the new ones, starts the service again, and logs all this activity to my central fileshare for easy auditing.

It's also easy to adapt this batch file for other CFs. When the CF has more than one DLL or JAR, for example, you can copy the block that begins with "set thelib=openhttp.dll" and modify the library you need. You do the same with the service name (perhaps you are working with the Automation server or Analytics).
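For instance, the readme quoted earlier in this post would call for a block like this, using the paths it lists:

set thelib=portaluiinfrastructure.dll
echo Backing up previous %thelib% (2 locations)... >> %logfile%
move %aluihome%\ptportal\6.1\bin\assemblies\%thelib% %aluihome%\ptportal\6.1\bin\assemblies\pre.cf_%cfversion%.%thelib%.old >> %logfile%
move %aluihome%\ptportal\6.1\webapp\portal\web\bin\%thelib% %aluihome%\ptportal\6.1\webapp\portal\web\bin\pre.cf_%cfversion%.%thelib%.old >> %logfile%
echo Copying in new %thelib% (2 locations)... >> %logfile%
copy %path_of_new_file%\%thelib% %aluihome%\ptportal\6.1\bin\assemblies\%thelib% >> %logfile%
copy %path_of_new_file%\%thelib% %aluihome%\ptportal\6.1\webapp\portal\web\bin\%thelib% >> %logfile%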

Enjoy.

What's in that Patch?

With a .NET customer, I recently upgraded ALUI 6.1.0.1 to ALUI 6.1 MP1 Patch 1. The process involved first running the MP1 installer, then the Patch 1 installer. But we encountered a bug in Patch 1 that meant we had to remove that portion of the upgrade.

BEA support suggested we roll back to the files as they had been before the Patch 1 installer, but since we did MP1 and Patch 1 as part of the same activity, we didn't have the backups support would have hoped for. Plus, we weren't sure whether Patch 1 modified config files.

I decided to identify the exact files that were modified to see if they could be easily backed out by just replacing some JARs and DLLs, and indeed, that was the case. My approach might be helpful to others. I did the following:

  1. Install MP1 on a fresh VMWare host
  2. Change the date on the VMWare host to tomorrow's date.
  3. Set the timestamp on every file in the c:\bea\alui directory to the date of the VMWare host. I did this with the following command using unix-esque executables from unxutils:
    c:\add2path\find.exe c:\bea\alui -exec touch {} ;
  4. Backup the c:\bea\alui directory as c:\bea\alui-mp1
  5. Restore the date on the VMWare host to the correct date
  6. Install MP1 Patch 1. This will result in timestamps of today or earlier for any file placed on the server by Patch 1.
  7. Search for all files modified anytime up until today but not including tomorrow (see the sketch after this list)
  8. Delete from c:\bea\alui-mp1 every file except those on the list of modified files
  9. We were only concerned with rolling back Patch 1 on the portal servers, so I made a zip file of the remaining contents of c:\bea\alui-mp1\ptportal. I named the zip file ptportal-remove-patch1.zip
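For step 7, the same unxutils tools can produce the list. Touch a marker file as soon as the Patch 1 install finishes; every untouched file still carries tomorrow's date, so anything not newer than the marker came from Patch 1. A sketch, with hypothetical marker and output paths:

c:\add2path\touch.exe c:\patch1.marker
c:\add2path\find.exe c:\bea\alui ! -newer c:\patch1.marker -print > c:\patch1-files.txt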

We then unzipped ptportal-remove-patch1.zip over the ptportal directory on the customer servers with good results. The following is the list of files that were installed or modified with Patch 1. Where the full path is not given, the file was under c:\bea\alui.

But wait, you say. Why did you mess around with changing the timestamps on the files? Why didn't you just back up the MP1 directory, then install Patch 1 and compare the two directories? The concern there is that it's hard to tell whether a config file that is identical before and after Patch 1 was left untouched (good information) or was overwritten with an identical file bearing the same timestamp (confusing). And if a file that the installers place identically needs some modifications for your environment, then you really need to know whether the installer would overwrite your changes.

If you ever have the situation where you need to know after the fact what changed, then you might take this approach.

Enjoy,

Bill

Configuring ALUI when SQL Server Uses a Named Instance

Often people are confused by how to configure their ALUI portal to run against a SQL Server installation that uses a named instance. This post will quickly review what this database configuration is, then explain how to run the portal with such a configuration.

MSSQL 2000 and 2005 allow multiple instances of the database to run on the same installation. When you create these instances, you distinguish them by a name that lets you refer to them in a human-centric way and a port number that is used across the network in a computer-centric way. For example, I might create a SQL Server instance named stagingdb to run on port 2048.

You can see how your server is configured as follows:

  • Go to Programs -> Microsoft SQL Server 2005 -> Configuration Tools -> SQL Server Configuration Manager.
  • Within that tool, go to SQL Server 2005 Network Configuration -> Protocols for {instance-name} -> TCP/IP Properties
  • Go to the IP Addresses panel, then scroll down to the TCP Dynamic Ports to see what port number corresponds to the instance.

 named-instance-config

Microsoft tools such as Query Analyzer and SQL Server Management Studio let you connect to this in a friendly way. For example, I can specify that I want to connect to the database "mymachine\stagingdb," and behind the scenes it will connect on port 2048 to that instance.

However, if I have a Java application that is not aware of the Microsoft concept of named instances, then I need to connect to it using the host "mymachine" and the port "2048."
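For example, with Microsoft's SQL Server 2005 JDBC driver, the connection URL names the host and port rather than the instance (the database name here is hypothetical):

jdbc:sqlserver://mymachine:2048;databaseName=portaldb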

So what about your portal? When you go through the Portal Configuration Manager (which has the same UI for Java and for .NET portals), the common UI doesn't allow you to refer to your database using the instance name. You've got to drop the name and just use the port. For example, on my machine, I have this:

 portal-config-named-instance

Notice that even though the Portal Configuration Manager prompts me with the default MSSQL port of 1433, I put in 1555. Also notice that even though my instance happens to be "localhost\SQLEXPRESS," I refer to it here as simply "localhost."

How to Revive a Failed Search Node

I recently posted about how to restore a search cluster that has failed. Step 1 was to make sure all the nodes are running. But what if one of the nodes won't start? What do you do when a node itself is corrupted?

Ironically, you can't use the tools to reset a node if the node is broken and won't start. But you can go on the file system and clean it up with these steps--though make sure the node really is stopped before you do this.

In shorthand for the Unix-minded:

set search_home=d:\bea\alui\ptsearchserver\6.1
set node=alui-ss-0101
rm -rf %search_home%\%node%\index\*
mkdir %search_home%\%node%\index\1
echo 1 > %search_home%\%node%\index\ready
cd %search_home%\%node%\index\1
..\..\..\bin\native\emptyarchive lexicon archive
..\..\..\bin\native\emptyarchive spell spell

Or more prosaically for those who prefer Windows and a mouse:

  1. Go into the index folder of the node. On my machine, it's d:\bea\alui\ptsearchserver\6.1\bbenac0201\index.
  2. The index folder will contain a folder named 1. Open this folder and delete all its contents.
  3. The index folder may also contain a folder named 2. If this is the case, delete the folder 2.
  4. The index folder should contain a file named "ready." Open it and make sure it contains just the number 1. This "ready" file tells the node to look within folder 1 for its content.
  5. Open a command prompt within the folder 1.
  6. Run these commands, the first of which should create about 8 files, the second of which should create about 7:
    ..\..\..\bin\native\emptyarchive lexicon archive
    ..\..\..\bin\native\emptyarchive spell spell
At this point, you should be able to start your node. It will contact the search cluster and populate itself with the current search index.

Ever have problems with your search cluster getting corrupted? One way to fix it is to wipe it out and reindex everything, but that may take 24 hours. A better approach is to use the feature of scheduling checkpoints to back up your cluster daily, then restoring a checkpoint if corruption occurs.

If your ALUI system isn't set to do daily checkpoints, configure them as follows:

  1. Browse to the portal's admin section, then select the utility "Search Cluster Manager."
  2. In the left navigation, select "Checkpoint Manager."
  3. On the far right, click "Initiate Checkpoint" to open the Checkpoint Scheduler.
  4. Select the "Scheduled" radio button, select today's date, set a time, and set it to repeat every 1 day. Click OK.
  5. After the Checkpoint Scheduler closes, you'll see in the "Checkpoint Activity Status" section when the next scheduled checkpoint will run.
  6. Click "Finish" and your system will then back up your search index daily into checkpoints.

If at some point you realize your cluster is corrupt and you need to restore it, and you've been creating checkpoints periodically, then:

  1. Make sure all the nodes in the search cluster are started. If one of the nodes won't start, you might want to use these instructions to revive it.
  2. Browse to the portal's admin section, then select the utility "Search Cluster Manager."
  3. In the left navigation, select "Checkpoint Manager."
  4. In the center of the screen you'll see a list of checkpoints. The most recent ones will show themselves as "Available" in the last column.
  5. Click on the checkpoint you want to restore. Its row will change from white to light green.
  6. Click the "Restore Checkpoint" button on the right side of the screen below the list of checkpoints.
  7. Watch the "Checkpoint Activity Status" section to see status. Use the refresh button at the top of the screen to update status. It may show messages like "Node pswwwlab-0301 completed copying - 0%."
  8. When it completes, you'll see in the "Named Restore Status" area something like this:
    Cluster is currently in a named restore state.
    Cluster restored from c:\\bea\\alui\\ptsearchserver\\6.1\\cluster\checkpoints\0_1_0.
    The named restore state - SUCCEEDED.
    The named restore must either be discarded or committed.
  9. Finally, you must "Commit" the checkpoint by using the "Commit" button at the bottom right of the screen.
  10. At this point the search cluster will be restored.

Note that by default, the most recent three search checkpoints are stored. So on the fourth day, the first checkpoint gets deleted. In some cases, you may prefer to have more or fewer checkpoints. If this is the case, then edit the cluster.ini file in your search cluster. Add the following new parameter to set the desired number of saved checkpoints, in this case, 2:

RF_NUM_CHECKPOINTS_SAVED=2

If you're lucky, you'll never need to restore a checkpoint. But you'd better be prepared. So make sure you've set this up!

A customer I work with divides responsibilities for their ALUI system among many distinct roles without many overlapping abilities. The DBAs, server administrators, and portal administrators aren't allowed to cross into each other's zones. So if the portal administrator, who has no direct access to the server, realizes the search server is hung, then that person needs to coordinate with the server administrator to restart it.

Such division of labor has its strengths, but in our case, the server administrator trusts the judgment of the portal administrator when it comes to restarting services--at least, in the development environment. So we decided to cut out the middle man.

We did this through the portal's external operation feature. The portal tells you "An external operation is used to execute batch files, shell scripts, or other system programs from the portal. Common examples of external operations include running shell scripts which can perform document queries, portal pings, or e-mail snapshot query results to subscribed users. Once created, these actions are scheduled as portal job operations."

Set Proper Cross-Server Security
In the case that your automation service runs on a different machine than the other service that you want to control, you'll need to make sure to set up the security properly. External operations run as whatever credential runs the automation service; by default, that is the local system account. If you keep this setting, then you need to give the server's machine account rights to access and restart services on the target boxes. If the server is WEB01, then the way to assign it rights on the target machine is through the account name WEB01$ (with a dollar sign after its name). You may prefer, though, to run the automation service as a domain-based service account that has elevated privileges across all servers. In such a case, no additional machine-based security will need to be applied.

Create the Batch File
To start, we had to make sure we had the tools that we intended to use: sc.exe and sleep.exe. Sc lets you control services on local and remote servers. Sleep lets you wait between commands. Some of the latest Windows servers install these tools by default. If your server doesn't have them, then install the Windows Server Resource Kit.

Then we wrote a batch file named restart.aluidev-03.service.bat.

set TargetService=SearchClusterUI
set ServiceHost=aluidev-03
set PathToSC=c:\WINNT\system32
set PathToSleep=e:\progra~1\Reskit

set LogFile=restart.service.log

echo %date% %time% Entering sequence to restart %TargetService% >> %LogFile%
%PathToSC%\sc.exe \\%ServiceHost% stop %TargetService%
%PathToSleep%\sleep.exe 10
%PathToSC%\sc.exe \\%ServiceHost% start %TargetService%
%PathToSleep%\sleep.exe 30
%PathToSC%\sc.exe \\%ServiceHost% start %TargetService%
%PathToSleep%\sleep.exe 60
%PathToSC%\sc.exe \\%ServiceHost% start %TargetService%

echo %date% %time% Restarted %TargetService% >> %LogFile%

The script restarts the BEA ALI Search Cluster Manager service on the aluidev-03 server. A few comments on it:

  • The sc command requires that you use the service name rather than the display name. So we went to the Services console, looked at the properties of the desired service, and we found the service name is SearchClusterUI.
  • We call the sc and sleep commands using fully qualified paths because the portal external operation won't have the Path environment variable.
  • After stopping, we try starting several times. The reason for this is that some services take longer than others to stop. If the service stops quickly, then our first start command will restart it. But if it stops slowly, we may not be able to start it on the first couple attempts. But after a total of 100 seconds? It should be ready to start.
  • For auditing purposes, we log each time this batch file is used.

We select one of the ALUI automation servers as the host for our script and place the batch file in its %pthome%\ptportal\6.1\scripts directory. After we test the script and see that it works, we're ready to move on.

Create the Portal Objects
In the portal we now create an External Operation. We set the command as follows:

".\restart.aluidev-03.service.bat"

We then schedule this External Operation via a Job, and then we make sure the folder for that Job object is registered in the Automation Service to be run by the automation server with our batch file.

We also make absolutely sure the security on the External Operation and Job are set such that only administrators can use them.

Can the Parameters be Passed to the Batch File?
The server administrator realized that using the above steps, he would need to create a separate batch file for every service on every server. Sounds like a pain, so couldn't he just pass the parameters from the External Operation to the batch file? Technically, yes, this is possible. But it can get risky. You probably don't want the portal administrator to be able to arbitrarily restart any service on those other machines in your rack.

But you might allow them to just restart services on a specific server. This might be reasonable if you have a dedicated development server. Anyway, to do that you would first edit your External Operation to pass in a parameter of the desired service name:

".\restart.aluidev-03.service.bat" "SearchClusterUI"

Then you would edit the previous batch file so that it takes its service name from the parameter. The batch file would then begin with this:

set TargetService=%1

Enjoy, and make sure to use this power judiciously!

Deploying Portlets with the Server API


Portlet developers write code that integrates to the portal with varying depth. Let's not consider the simplest side of the spectrum where a pagelet has no ALUI-specific code and is only a portlet because the portal registered it. Examples of these are the Google Gadgets I previously wrote about.

We'll consider only portlet integrations that use an API to leverage ALUI features. But API, an acronym that stands for Application Programming Interface, needs some disambiguation since ALUI uses it in at least three ways related to portlets.

[1] First, there's the "BEA ALI API Service," a component installed on your ALUI system. Some portlets and other applications send SOAP requests through this service to get information into and out of the portal.

[2] Next is the AquaLogic IDK API. This used to be called the EDK, or just the remote API. A portlet can include the libraries for this API in its bin or lib directory. Such a portlet may run on a network external to the core portal, and can even be hosted by a third party. These IDK libraries pass information through HTTP headers that only make sense in the request and response context of a portal. Portlets using the IDK can do things like get and set preferences from the portal database. Some IDK calls use the API service for deeper portal interaction such as creating certain types of portal objects. Usually developers require nothing more than the IDK to write their portlets.

A request from the portal to a portlet may include the following headers, which illustrate details available to the IDK:

CSP-Protocol-Version: 1.3
CSP-Can-Set: Gadget-User,User
CSP-Gateway-Specific-Config: PT-User-Name=Administrator,PT-User-ID=1,PT-Stylesheet-URI=http://www.a.com/imageserver/plumtree/common/public/css/mainstyle-en.css,PT-Hostpage-URI=http%u003A%u002F%u002Flocalhost%u002Fportal%u002Fserver%u002Ept%u003Fopen%u003Dspace%u0026name%u003DMyPage%u0026id%u003D9%u0026cached%u003Dtrue%u0026in%u005Fhi%u005Fuserid%u003D1%u0026control%u003DSetPage%u0026PageID%u003D208%u0026,PT-Community-ID=0,PT-Gadget-ID=226,PT-Gateway-Version=2.5,PT-Content-Mode=1,PT-Return-URI=http://localhost/portal/server.pt/gateway/PTARGS_16_0_0_0_43/a?b=c&,PT-Imageserver-URI=http://www.a.com/imageserver/,PT-User-Charset=UTF-8,PT-SOAP-API-URI=http://bbenac01:11905/ptapi/services/QueryInterfaceAPI,PT-Portal-UUID={bcdd12067bd44a26-10ff664aba20},PT-Class-ID=43,PT-Guest-User=0
CSP-Aggregation-Mode: Multiple
CSP-Global-Gadget-Pref: MaxObjectsToExamine=100

A response might have something like the following, sending a command to the portal about how the portlet should display (a very simple example):

CSP-Display-Mode: Hosted

[3] Finally, we have the core server API. This is the full library of DLLs or JARs that installs on a server that hosts components such as the ALUI Portal, Admin Portal, API Service, or Automation Service. The server API runs the core portal features, and developers use it for their portlets when they require more than what the IDK offers. Such portlets might create and delete users, audit portlet objects, or modify experience definitions.

Deploying Server-API Portlets

A portlet or other application that uses the core server API can easily be deployed on the same server as an ALUI portal or admin portal, since that box has all the server API libraries. It should work just as well to install the server-API portlet on an automation server, but I recently found it didn't. Why not?

It turns out the ALUI 6.1.0.1 installer (that I was testing with) chooses not to drop some important files on the server when you install just the Automation Server component. This is a small annoyance easily overcome. At least on Windows, these are the required steps:

1. Make sure your Automation Server is installed and running properly.
2. Copy plumtree\ptportal\6.0\bin\assemblies from a portal server onto the automation server.
3. Edit plumtree\settings\common\serverconfig.xml so that the adonet-license-file-directory value is the path to that assemblies directory.
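
For reference, the edited serverconfig.xml entry might look something like this. I'm assuming the setting/value element style used in the 6.x configuration files and an illustrative Windows path, so adapt both to your actual file:

    <setting name="adonet-license-file-directory">
        <value xsi:type="xsd:string">C:\plumtree\ptportal\6.0\bin\assemblies</value>
    </setting>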

What about Java portlets that use the server API? There's a good chance they'll run on an Automation Server without any extra steps. First, the Automation Server always runs on Java, and it already installs \ptportal\6.1\lib\java, which holds the same libraries as the missing assemblies directory. Second, the serverconfig.xml value for adonet-license-file-directory applies only to ADO.NET connections.

My untested bet is that a machine with just the API service would behave like one with just the Automation Server component. The API service is always in Java.

Deploying Server-API Portlets on a Portlet Server

Some customers want to keep their portlets running on their portlet servers rather than moving them onto the portal server or the automation server. That's a fine idea, and you accomplish it by installing a core component on the portlet server. You can then disable the service for that core component; essentially, this lets your portlet server run server-API portlets without the overhead of actually running that core component.

Replicating the ALUI Grid Search Index

|
The ALUI search component underwent a major redesign for the 6.1.0.0 release. In earlier versions, the indexing component of search was a single point of failure. The querying component though could be made highly available by replicating its index to secondary servers. Release 6.1 brought the important improvement of allowing both indexing and querying to be redundant. A customer can install two search servers to act as nodes of the same partition, and the index takes care of itself.

So there is no longer a need to write special scripts to replicate the 6.1 search index, right? Not if you are just trying to make your live index redundant, and accordingly, the 6.1 product dropped the old "replicate" tool.

But larger customers have a use case that still requires index replication. If the customer has a failover datacenter for use when the primary system is unavailable for one reason or another, then the customer needs to replicate the search index to that site. How do you do this? It used to be very easy. Consider the following, which did the job on ALUI 6.0 (in this case on RHEL). It was two easy commands:

    SEARCHHOME=/usr/local/plumtree/ptsearchserver/6.0
    
    echo ----------- copy the master search index to a backup directory
    $SEARCHHOME/bin/replicate -incr_backup aqlsearch 15244 $SEARCHHOME/indexmaster $SEARCHHOME/incr_backup
    
    echo ----------- restore the backup index to the failover search server
    $SEARCHHOME/bin/replicate -restore aqlsearchfail 15244 $SEARCHHOME/index $SEARCHHOME/incr_backup
    
      You can still attain the same result in the ALUI 6.1 releases, but hold onto your hat. It's a wild ride.

      • Copy the %searchcluster%\checkpoints and %searchcluster%\requests folders from the origin server to a temporary directory on the destination server
      • Make sure all search nodes and services are running properly
      • Run the following command to empty the checkpoints folder and set the requests folder to only have an indexQueue.segment:

               %searchhome%\bin\native\cadmin.exe purge --remove

      • Stop all nodes and services
      • Copy the origin checkpoints and requests folders over their corresponding destination folders, including the indexQueue.segment
      • Replace the cluster.nodes file(s) in the checkpoints folder(s) with the one from the destination cluster base. For example, copy %searchcluster%\cluster.nodes over %searchcluster%\checkpoints\0_106_30976\cluster.nodes
      • Start all nodes and services. The nodes will restore from the checkpoint. If you have a large index, the nodes may take a while to start (five minutes for 1.2 million objects). If you have multiple nodes, they may take another several minutes to move out of the stalled/recovering state.
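
    If you script the destination side, it might look something like the Windows sketch below. The staging directory is illustrative, and the stop/start steps are left as comments since service names vary by install, but the cadmin command and the folder shuffle match the steps above:

        rem Sketch: restore replicated checkpoints on the destination server
        rem (paths are illustrative; adjust to your install)
        set SEARCHHOME=C:\plumtree\ptsearchserver\6.1
        set SEARCHCLUSTER=C:\plumtree\ptsearchcluster
        set STAGING=C:\temp\searchcopy

        rem Empty the checkpoints folder and reset the requests folder
        %SEARCHHOME%\bin\native\cadmin.exe purge --remove

        rem (stop all search nodes and services here)

        rem Copy the origin folders over the destination's
        xcopy /e /y %STAGING%\checkpoints %SEARCHCLUSTER%\checkpoints\
        xcopy /e /y %STAGING%\requests %SEARCHCLUSTER%\requests\

        rem Put the destination's own cluster.nodes into each checkpoint folder
        for /d %%c in ("%SEARCHCLUSTER%\checkpoints\*") do copy /y "%SEARCHCLUSTER%\cluster.nodes" "%%c\cluster.nodes"

        rem (start all nodes and services; they will restore from the checkpoint)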

    Other migration methods may have trouble migrating checkpoints from a system with low-numbered checkpoint folders to a destination system with higher-numbered folders. This method however has no such problem.

    These steps have been tested only on systems that use a single partition. The number of nodes in the partition is not significant; these steps have been tested when restoring the cluster to destination systems with different numbers of nodes than the origin.

    The directories can be copied from the source while all services are up, and this should not cause synchronization issues later during the restore. So if you take a checkpoint, then later make several changes to the index, then later copy the checkpoints and requests folders to the destination, the destination will have all the changes in its restored index, including those made after the checkpoint.

    This process was created with significant input from Dax Farhang, the product manager responsible for the search product. If we're lucky, he'll cook this feature into the 6.5 ALUI product so that this post will become obsolete. In the meantime, enjoy.

    Easily Configure Grid Search Load Balancing

    |

    The reality is load balancing can be confusing. Some applications require sticky load balancing (like the portal), some require shared file system access (like the document repository), some require an external load balancer (like the image server), and some are load balanced internally by the portal (like most custom portlets). So not surprisingly, some customers approach load balancing with trepidation, or perhaps they just expect implementing it will require mad voodoo engineering skills.

    Enter Grid Search. The 6.1 version of the search server was rewritten with a raft of design changes, including improved scalability. Previously, only the querying side could scale out; indexing could not. Now it has a tool called the Search Cluster Manager, which tells the story, right?

    Actually, though the Search Cluster Manager's name correctly says "we can cluster," it gives many people a false idea about how clustering works. Administrators use the Search Cluster Manager to add a new search node to the cluster, but that's it. The Search Cluster Manager isn't required at runtime for end users who need their queries spread across the multiple nodes in the cluster. This component would be better named something like "Search Cluster Administration." Those who are used to configuring components to sit behind load balancers frequently expect that the portal must connect to the cluster manager to serve content from the nodes.

    In fact, when an administrator installs a new search node and adds it to the cluster through the Search Cluster Manager, the product goes on auto-pilot, and the load balancing is done. When the portal had only one search node, it was configured in the Search Server Manager to use that single node as the search cluster contact node. The magic is that after adding a second node, the contact node doesn't need to change. The search server is smart enough to notify the portal of any other node participating in the cluster. So the portal knows all the servers toward which it can distribute search traffic.

    When you click the Search Server Manager's "Show Status" button, you can verify that it knows about all your nodes even though it's configured to use just one of them as the Cluster Contact Node. Note that at the bottom of the page, each node is listed:

    searchstatus

    You can read a full document I put together that several people have told me provides a useful enhancement to the standard documentation. It discusses installation, creation of nodes, the rationale behind your choices as you go, and a little bit on load balancing. Feel free to download the file.


    Using Google Portlets in ALUI

    |
    I was asked to take a look at the Google-powered "gadgets" that can quickly be added to an ALUI portal as portlets.

    My first response is that I like Google's terminology. Ahh, the good old days before Plumtree changed its jargon to the "industry standard" of "portlets" instead of the arguably better "gadgets."

    And functionally? Google's offerings are standard JavaScript-driven syndicated gadgets. Basically, they provide content you can display within your own portal. You can go to the Google gadgets website, read up on the concept, then click into the list of gadgets to see whether any are appropriate for you. The selection is great. Some are pure fun, designed for your blogging teenage sibling or child, such as the Tetris game. Others are likely candidates for a company's intranet, such as the weather and stock quotes portlets. Here's the (depressed) BEA stock quote as it displayed in my portal:

    stock.quote

    It's very easy to add this stuff to your portal. You pick the content you want, fill out some parameters for the desired size and colors, maybe a zip code for a weather portlet or a stock symbol, then click to get the HTML that you'll use for your portal to grab the content. The HTML string they provide is a single line. The format will be something like this:

    <script src="http://gmodules.com/ig/ifr?url=http://content.provider.com/file.xml&various_parameters&output=js"></script>

    If you want to get content into your portal quickly, this content could be very attractive to you.

    On the other hand, the Google solution isn't perfect. For example, the content displays in your portal with its own stylesheet information, so it's likely that your portal stylesheet won't match the Google content's stylesheet. Portal managers are all over the map as far as how much they care about consistent look and feel. Some will find this anathema, and others won't even notice. Take a look at this, for example, where neither the font nor color of the Google portlet matches my portal:

    flightinfo

    Also, you may find it annoying that users are one click away from leaving your portal if you use these. You can't keep the clicked content within your portal like you would with a custom portlet. For example, if you submit flight details into the portlet above, you'll go to the FlightStat.com website outside of the portal. Even if you were to set the URL of the content provider as a gateway URL prefix for the web service and specified that gatewayed content should display under the portal banner, you still wouldn't be able to affect this behavior. That's because the content is loaded directly by the user's browser via JavaScript instead of passing through the portal server, where its URLs would be rewritten based on those web service settings.

    It's important to realize that Google's gadgets provide just one example of what many sites are doing: providing content that you can stick in your portal. Google is great because you can be confident it won't go away, and it's a magnet for content providers. But there are other options out there:

    Moreover.com, which many years ago started providing news content this way, still lives. They provide great options for customizing presentation, since instead of just giving a single line to describe the content, they give you many dozens of lines that include their stylesheet definition, whether to include dates, and so forth. See how nicely the New York Times headlines appear in my portal:

     nytimes

    Also, many of the originators of content let you sign up to get a snippet of code to create a portlet. Weather.com, for example, offers it here.

    And then there are vast catalogs, filled with junk, but with jewels too, such as what you find at widgetbox.com or directory.snipperoo.com. I was feeling cheery, so I grabbed a nice terror alert portlet from widgetbox:

     terror.alert

    Anyway, the point is that this content is abundant. And how do you put this stuff into your ALUI portal exactly? Any customer can follow these steps:

    1. Create an html file such as stockquote.html that contains the code snippet from Google (or whomever).
    2. Place that file on a webserver of your choosing. You might drop it on your image server (I know you have one) in %pt_home%\ptimages\imageserver\RemoteGadgets.
    3. Browse to the page to make sure you have the URL right: http://myserver.company.com/imageserver/RemoteGadgets/stockquote.html
    4. Create a new web service, selecting a remote server object that points to your web server and entering the URL of your portlet:
       portlet.reg
    5. Create a new portlet based on your web service.
    6. Add the new portlet to your MyPage or Community page.
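
    For step 1, the file can be as small as the snippet itself, since portlet responses are just HTML fragments. Using the illustrative gmodules URL from earlier, stockquote.html might contain nothing but:

        <!-- stockquote.html: nothing but the syndication snippet -->
        <script src="http://gmodules.com/ig/ifr?url=http://content.provider.com/file.xml&various_parameters&output=js"></script>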

    The folks who suggested I post about Google's gadgets? They gave me screenshots showing how this can be even easier with the ALUI Publisher product. The nice thing there is that you can use Publisher's Announcement template to immediately create a portlet that takes arbitrary HTML content (your Google gadget snippet) without requiring you to put anything on the file system as in my previous instructions. It's basically driven by portlet preferences, and if you know much about those, then you know that you can create this same effect with a custom-written preferences portlet that doesn't require you to purchase a big product like Publisher. Here are those Publisher screenshots; a sketch of such a prefs portlet follows these steps:

    • Create a new Announcement portlet in the portal (AquaLogic Interaction). In the administrative preferences for that portlet, toggle to the source view:

       publisher.create

    • Insert the tag and click finish:

       publisher.edit

    • Add the portlet to your page.
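
    Here's what that hypothetical prefs portlet might look like as a JSP, reading an administrative preference named "html" and writing it straight into the response. As with the earlier IDK sketch, the names come from my memory of the IDK javadoc, so verify before relying on them:

        <%-- prefs.jsp: a hypothetical preferences-driven portlet --%>
        <%@ page import="com.plumtree.remote.portlet.*" %>
        <%
        try {
            IPortletContext ctx = PortletContextFactory.createPortletContext(request, response);
            // Administrators paste arbitrary HTML into this preference;
            // the portlet simply echoes it into the page
            String html = ctx.getRequest().getSettingValue(SettingType.Admin, "html");
            out.print(html != null ? html : "<p>No content configured yet.</p>");
        } catch (Exception e) {
            out.print("<p>IDK error: " + e.getMessage() + "</p>");
        }
        %>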

    Enjoy.

    Using ALUI URLMappings, especially for debugging

    |
    Often ALUI customers deploy portals accessed through a single URL that doesn't go directly to the portal servers. This may be the URL of an SSL accelerator, a load balancer, or a proxy. Unless the customer deploys the portal with an extra URLMapping or two, portal administrators or IT staff may find themselves able to access the portal only through the single public URL. It can be much easier to debug load balancer problems or troubleshoot an SSL portal if the admins can access the system through administrative URLs. The portal's URLMapping feature can allow this.

    This article will explain the URLMapping feature, show a common but limiting configuration with proxies or SSO, and show alternate configurations that are more flexible.

    What are URLMappings?
    The portal needs to know how to write its HTML so that links go to URLs that make sense to the end user. In the simplest portal configuration, users browse directly to the portal (http://simple/portal/server.pt), and the portal returns a page with links that continue to use http://simple/portal/server.pt. No mapping required. But consider a more advanced configuration that involves a load balancer. Users browse to http://public/portal/server.pt, then the load balancer forwards traffic to http://serverA/portal/server.pt and http://serverB/portal/server.pt. The portal needs to return HTML to the end user with the base of http://public instead of http://serverA. URLMappings instruct the portal how to do this.

    How URLMappings Work
    URLMappings are configured in \plumtree\settings\portal\portalconfig.xml (6.x) or \plumtree\ptportal\5.0\settings\config\x_config.xml (5.x). You can create as many URLMappings on a system as you'd like. Each mapping has three elements, which share a numeric suffix (0, 1, 2, and so on):

    • URLFromRequest is the URL the portal sees in the incoming request. This isn't necessarily what the user typed in, such as when the portal is behind a load balancer or proxy. The portal tries to match the incoming request to the URLFromRequest value, and if it finds a match, then it will map according to the values found in the next two elements. You can look in PTSpy for a message like this one to know what the portal sees: Entering handleRequest: GET http://serverA:80/portal/server.pt.
    • ApplicationURL is the base URL the portal will use to rewrite links for HTTP traffic.
    • SecureApplicationURL is the base URL the portal will use to rewrite links for SSL traffic.

    The portal evaluates each URLMapping in its config file, and once it finds a matching URLFromRequest, it can rewrite URLs. The portal must find a match though, and so you must make the final URLFromRequest setting an asterisk to handle any case not matched by prior URLMapping rules (an IP address, for example).

    Common Configurations
    I show examples in the XML format used by 5.x portals since it's visually more concise than the 6.x format. The concept is the same in both versions, though. By default, the URLMapping section looks like this in 5.x portals:

    <URLFromRequest0 value="*" /> 
    <ApplicationURL0 value="*" /> 
    <SecureApplicationURL0 value="*" /> 
    
    That is essentially a pass-through mapping. Users can go to http://simple/portal/server.pt or https://simple.fully.qualified/portal/server.pt, and the portal will write links to keep them on the same base they first entered.

    With a load balancer, the URLMapping section could be as simple as this in the case that the user types in http://public to get to the load balancer, then the load balancer forwards traffic to http://serverA:

    <URLFromRequest0 value="*" />  
    <ApplicationURL0 value="http://public/portal/server.pt" />  
    <SecureApplicationURL0 value="https://public/portal/server.pt" /> 
    

    Advanced Configurations
    Many customers don't go further than the above mappings. But let's say you have http://serverA and http://serverB behind your load balancer, and you want to be able to browse to a specific server. You could use the following URLMappings to do this. They test whether the user requested a specific machine. If they did, they'll stay on that machine. Otherwise, they'll use the load balancer's URL:

    <URLFromRequest0 value="http://serverA/portal/server.pt" />  
    <ApplicationURL0 value="http://serverA/portal/server.pt" />  
    <SecureApplicationURL0 value="https://serverA/portal/server.pt" /> 
    
    <URLFromRequest1 value="http://serverB/portal/server.pt" /> 
    <ApplicationURL1 value="http://serverB/portal/server.pt" /> 
    <SecureApplicationURL1 value="https://serverB/portal/server.pt" /> 
    
    <URLFromRequest2 value="*" />  
    <ApplicationURL2 value="http://public/portal/server.pt" />  
    <SecureApplicationURL2 value="https://public/portal/server.pt" /> 
    
    It's also possible that, for security purposes, you want to require all traffic to go through a URL protected by an SSO product or through a proxy. How can your system administrator debug, say, when you think a problem is caused by your proxy? You could configure a URLMapping so that when the system administrator logs into the server console, he or she can access the portal at http://localhost/portal/server.pt:
    <URLFromRequest0 value="http://localhost/portal/server.pt" />  
    <ApplicationURL0 value="http://localhost/portal/server.pt" />  
    <SecureApplicationURL0 value="https://localhost/portal/server.pt" /> 
    
    <URLFromRequest1 value="*" /> 
    <ApplicationURL1 value="http://public/portal/server.pt" /> 
    <SecureApplicationURL1 value="https://public/portal/server.pt" /> 
    
    There are other ways to put URLMappings to work for you, but this discussion should give you the base understanding required to get started.
    And a Bug
    [Added July 23, 2007]

    It turns out there's a bug. Basically, gatewayed portlet content has URLMapping rules applied to it twice. So let's say the initial configuration is this:

    URLMapping0: http://internal1/portal/server.pt -> http://proxy1/portal/server.pt
    URLMapping1: http://internal2/portal/server.pt -> http://proxy2/portal/server.pt
    URLMapping2: http://internal3/portal/server.pt -> http://proxy3/portal/server.pt
    URLMapping3: * -> http://everything.else/portal/server.pt
    

    So a user browses to their My Page, and the URL for links to portal infrastructure such as other My Pages or Communities would properly rewrite from http://internal1/portal/server.pt to http://proxy1/portal/server.pt. But within the gatewayed content of the portlet, a second rewriting would occur. In this case it would rewrite http://proxy1/portal/server.pt (which matches only the final * rule) to http://everything.else/portal/server.pt.

    The workaround isn't too hard. All you need to do is have a URL mapping section that matches the external URL and says "when this URL is encountered, leave it alone."

    You would add mappings such that:

    URLMapping0: http://internal1/portal/server.pt -> http://proxy1/portal/server.pt
    URLMapping1: http://internal2/portal/server.pt -> http://proxy2/portal/server.pt
    URLMapping2: http://internal3/portal/server.pt -> http://proxy3/portal/server.pt
    URLMapping3: http://proxy1/portal/server.pt -> http://proxy1/portal/server.pt
    URLMapping4: http://proxy2/portal/server.pt -> http://proxy2/portal/server.pt
    URLMapping5: http://proxy3/portal/server.pt -> http://proxy3/portal/server.pt
    URLMapping6: * -> http://everything.else/portal/server.pt
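
    In the 5.x XML syntax used earlier, the identity mapping for the first proxy (URLMapping3 above) would look like this:

    <URLFromRequest3 value="http://proxy1/portal/server.pt" />
    <ApplicationURL3 value="http://proxy1/portal/server.pt" />
    <SecureApplicationURL3 value="https://proxy1/portal/server.pt" />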
    
