Ironman Santa Rosa

Ironman Santa Rosa is in the books. It's quite a production to work the logistics for these races. I'm grateful Kelsey let me have the indulgence. My time was 12:17:16 overall, with 1:30:10 for the swim, 5:59:41 for the bike, 4:25:58 for the run, and the rest for transitions. I was the 483rd finisher among 1913 starters, about a quarter of the way from the top in the rankings, which I feel good about. Here's how it all went down.

The night before the race, instead of sleeping at the hotel with my family and a room full of distractions, I managed to get a quiet room in the home of local friends Paul and Kari Jones. I spent an hour chatting with their family in the evening, drifted to sleep before 10, and woke up at 3:30, a bit before my alarm. Paul left a supportive note on the front door for me. I brought the car back to the hotel so Kelsey would have it as she entertained the kids during the race, and I picked up my gear. But I made a flawed last-minute decision: leave my wetsuit behind. The rules allow wetsuits in water up to 76.1 degrees. The day before, they had announced the water was 76.4 degrees. Since the water generally gets warmer each day through the summer, I figured there was no way wetsuits would be allowed, so why bother using scarce space in my bag to haul mine around?

 
I headed for the shuttles that would take us to the swim start. They left from downtown Santa Rosa during the 4 a.m. hour. The word on the bus was that the water temperature had been called at 76.1 degrees, so wetsuits would be legal. I was skeptical of the rumor. But an hour later, once we were dropped at Lake Sonoma for the swim, the fact was clear: it was a wetsuit race. Either the temperature reading was done differently from the day before, or the number was fudged. So nearly everyone at the lake was in a speed-enhancing wetsuit. We lined up according to expected finish times, with the fastest swimmers in front so we would be less likely to climb over each other in the water. I jumped in around the 1h30m sign. As I approached the water, announcer Mike Reilly commented about me, "No wetsuit, no shave!" True enough.

The swim was a pleasure, despite my relatively bad performance (1157 out of 1913 overall, 40th percentile). The venue was as fine as I could have hoped for. Lake Sonoma, with steam rising toward the angled light of sunrise, was gorgeous. The water was clean, and its temperature perfect. My 1:30:10 time hit my target to the minute. I had worried, based on bad training swims, that I might actually get pulled from the course for failing to finish within the 2:20 limit. That obviously wasn't a legitimate concern. More threatening was the likelihood that I would have stomach pain. Frequently after swims exceeding about 45 minutes, my stomach cramps in a debilitating way that leaves me in a fetal position for a few hours, seemingly unable to eat. Pain is an inconvenience, but nutrition is essential in a race. Anticipating that I might not be able to eat afterward, I downed 100 calories of Gu while walking into the water. After the first of two laps, when we exited the water to cross a timing mat, I downed another Gu that I had carried in a pocket. And arriving at transition, I drank a 220-calorie Ensure. So I was 420 calories in before I reached the crucial post-swim window when the pain often arrives. But this day it never did.

My bike segment felt unremarkable, but I'll remark all the same. My position was 542 out of 1913, 28th percentile. The course wound its way from the lake through bucolic vineyards and forests down to the city of Santa Rosa. It measured short on my bike computer at about 110.5 miles, which seems consistent with what others reported in a Facebook group about the race. The elevation gain was about 3000 feet over the ride, according to several people's GPS tracks. The first half was lovely. And because I had started the bike late due to my slow swim, I passed what seemed like hundreds of riders during that first half for whom biking was weaker than swimming. I was very focused on nutrition, counting calories obsessively. But it was hard. About an hour into the bike, I was looking down at the Gatorade bottle lying in the cage between my handlebars and essentially yelling at it that I had to pick it up and drink it. I needed willpower to force myself to eat. Over the first three hours, I managed to eat 1200 calories, and I felt pretty strong. But then my heart rate and speed began to fade. Perhaps around mile 70 I started to feel delirious, and I began to wonder about crashing and laying down my bike. My eyes drooped. I'm sure my reflexes were slow. I realized this was dangerous and tried to shake myself out of it. At an aid station I grabbed a 200-calorie package of Clif Bloks, which I downed in one shot, and I think it helped. I eventually recovered and felt strong from mile 85 or so. The final 50 miles of the course were a couple of laps close to the city, with tighter traffic and some stretches of rough road. Overall, the course was within my expectations with its mix of smooth and rough surfaces, although many people had big complaints about the rougher stretches. The one thing I didn't expect was that as I bent over my bike, my shirt would slide slightly above my pants, leaving a strip of skin to be baked by the sun for nearly six hours...

Once I hit the run, I felt giddy. Even before the start of the race, I knew that running was all I wanted to do, and I was questioning why I had even signed up for a race that would require most of my time to be spent doing something else. I was pleased with my performance, which gave me position 395 out of 1913, 21st percentile. I held my heart rate around my target of 160 for the first two hours (and I hit the half marathon mark at 2:02:28, a 9:22 pace), but in the third hour my heart rate sagged to around 150, which coincided with slower running. Around the start of the fourth hour, at the 20-mile mark, my body crashed. I got delirious, decided to walk for half a mile, could hardly keep my eyes open, and was weaving a bit (so much so that a couple of times people checked to see if I was okay). That fourth hour my heart rate averaged 130. I picked up in the final fifteen minutes or so, returning to about 150. I'm sure the hours of the day had taken their toll, but I think nutrition was a factor too, and probably my only mistake other than leaving my wetsuit behind. I didn't have a plan, and I couldn't keep track of calories on this segment. I hit every aid station, alternating between drinking water and Gatorade and grabbing random pieces of fruit, then toward the end mixing in Red Bull and Coke. But I had no idea how many calories were in two inches of this, or an inch and a half of that. I drank a 220-calorie bottle of Ensure at transition when the run started, and I carried another bottle of the stuff to the third mile marker, where I stashed it to be fortuitously retrieved just after mile 20. I also grabbed a bottle from my special needs bag around mile 17. A surprise from the run was that I didn't manage to have much conversation with other runners, as my pace rarely matched an adjacent runner's for even as long as a minute. I spoke briefly with a speed-walking woman who happened to work for Brooks and seemed somewhat stunned to see me wearing antique Brooks Launch shoes, released in 2012. Kelsey and the kids came to the course to watch for a bit, and they used the nicely done athlete tracker to anticipate where they would find me. On my first pass, I high-fived them all, and then perhaps 15 minutes later, after a turnaround, I came back to give them all hugs and love. They were in the finish chute too, cheering like mad, but with voices drowned out by the rest of the crowd, so I didn't wind up seeing them until I curved around the finish line with a grin on my face.

I brought my oldest two kids back to the finish area for the final 45 minutes of cheering in the stragglers. I am most impressed by the fastest finishers who are the top athletes and by the slowest finishers who are doing something far beyond their normal abilities. The stand-out finishers were a woman who crossed with about five minutes to spare as her brother went absolutely insane on the side of the finish chute cheering for her, and then the last person to make it in time. We watched that person appear perhaps 100 yards in the distance as the clock showed one minute remaining. Would they be able to make it before the clock rolled over to the 17 hour limit? Indeed, with not even ten seconds to spare. I was glad my kids could feel this excitement, and I hope the inspiration it brought them will contribute to them having a life of athleticism.

Perhaps I'll come back in another five years to race another one of these. But a big feeling from the day was that my interest in long races overall is waning. I've become more interested in the social prospects of endurance biking and running than in the rankings that come from racing hard -- and alone. I can imagine, though, that I won't be entirely satisfied with my performance at any Ironman, and that will always make the next race enticing. I wonder whether I would manage to sign up for long races with friends, with the plan to stay together for the entire event. I have the Canyon de Chelly Ultra on my calendar for October, along with a couple of friends. I think I would enjoy it more if I hung with one or even both of them for the duration of that race, but it's so tempting to push for one's best official result. Races lose their novelty. Friends never do. And it's for my friends who took an interest in my race that I wrote this report. Thanks to each of you!

PS: Here are numbers I pulled from the official results of this Saturday, July 29, 2017 race:

Registered: 2149
Finished: 1711
Started but DNF: 160
DQ (probably started): 15
Didn't start (DNS): 263

Hello Folks:

I'm now blogging about something new: WSP! You jam band dance freaks immediately think Widespread Panic, but no, I'm talking about IBM WebSphere Portal. After so many years of working with the familiar Oracle/BEA/Plumtree portal, it's stimulating to now work with the mysteries of a new product.

But mysteries are meant to be discovered. So here's one for today: how to trace database activity from WebSphere Portal. And it seems appropriate to call this a mystery. When I asked two IBM consultants for help getting oriented to the schema, they both told me they never look directly in the database, with one emphatically saying that in ten years he had never done it. Well, the time is right to unveil some new info.

The WSP database schema is harder to understand at first glance than Plumtree's was, but the product comes with features to lay it out for you. Plumtree had PTSpy logging that would show you the exact queries and commands executed as the system marched along. With WSP, a bit of configuration and digging turns up similar information in the trace.log file. Here's the process:

1. Turn on tracing. Navigate to the administrative area Portal Analysis > Enable Tracing, then add these trace strings:

com.ibm.wps.datastore.*=all:   
com.ibm.wps.services.datastore.*=all

2. Perform the portal actions whose database queries and commands you're interested in. I recommend submitting some easily traced values. For this example, I created a page with the name "simplicity-name," description "simplicity-description," and so forth.

3. Search the log files for the commands you're interested in. Since the logs roll over quickly from trace.log into timestamped files like trace_15.04.23_20.15.47.log, you may want to search the rolled files too, using a pattern like trace*log.

My first search was for this:

grep -3 simplicity ./wp_profile/logs/WebSphere_Portal/*   | grep -i replace

The results included this:

StatementTracer executeUpdate Replaced SQL: INSERT INTO 
customization.PAGE_INST_LOD (PAGE_INST_OID, LOCALE, TITLE, DESCRIPTION) 
VALUES 
(00004957101501150159803225B974D380CC(Z6_9QL0HA40L80I50A659E9N93GC6), 
en, simplicity-title, simplicity-description)
/opt/middleware/wp_profile/logs/WebSphere_Portal/trace_15.04.23_20.15.47.log:[4/23/15 20:15:46:022 PDT] 000000ec SQLStatementT 3 com.ibm.wps.datastore.impl.debug.SQLStatementTracer executeUpdate Replaced SQL: INSERT INTO customization.PAGE_INST_DD (PAGE_INST_OID, NAME, VALUE) VALUES (00004957101501150159803225B974D380CC(Z6_9QL0HA40L80I50A659E9N93GC6), com.ibm.portal.friendly.name, simplicity-friendly)

And from that I could tell my new page had the ID of 00004957101501150159803225B974D380CC. So with that info, I could search for all the SQL related to it:

grep -3 00004957101501150159803225B974D380CC ./wp_profile/logs/WebSphere_Portal/*   | grep -i replace

That gave me many lines. I wanted a summary of all the tables that were inserted into as my new page was created, so I listed just the unique tables receiving INSERT statements, stripping out the exact commands:

$ grep -3 00004957101501150159803225B974D380CC ./wp_profile/logs/WebSphere_Portal/* \
  | grep -i replace | grep -i insert | sed s/".*INSERT"//g | sed s/"(.*"//g | sort | uniq
 INTO customization.COMP_INST
 INTO customization.COMP_INST_DD
 INTO customization.PAGE_INST
 INTO customization.PAGE_INST_DD
 INTO customization.PAGE_INST_LOD
 INTO customization.PAGE_INST_MAD
 INTO customization.PROT_RES
 INTO customization.UNIQUE_NAME

I hope that's enough to get you started. After a first glimpse of which actions use which tables, you'll be able to start piecing together relationships between tables and writing queries that can answer questions for you. For example, here's one I put together to see the various elements in the RELEASE database that make up Web Application Bridge portlet entries:

select * from outbound_config, outbound_mapping, outbound_policy, outbound_cookie_rule
where OUTBOUND_MAPPING.OUTBOUND_CONFIG_ID = OUTBOUND_CONFIG.OID
and OUTBOUND_POLICY.OUTBOUND_MAPPING_ID = OUTBOUND_MAPPING.OID
and OUTBOUND_COOKIE_RULE.OUTBOUND_POLICY_ID = OUTBOUND_POLICY.OID;

And what keeps you up at night?

Enjoy!
Hi Folks:

To celebrate the eve of my nation's Independence Day, I free myself from querying only a single Analytics asfact (Analytics Server fact) table at a time. As a reminder, in most Analytics installations the system archives events monthly, moving the prior month's events into a table of their own. That makes it harder to get a view across a period longer than what any one table holds.

Consider first this query that tells me the users who have logged into the system in the past week and the number of their logins, with extra info about when the first and last of those logins were:

select distinct
u.name user_name, count (u.name) total_logins,
min(occurred) earliest_login_of_period, max(occurred) last_login
from asdim_users u inner join asfact_logins logins on u.userid = logins.userid
where occurred > GETDATE()-7
group by u.name
order by last_login desc


That's fine, but what if I want the past 45 days? Or 90 days? I can change the query so that instead of treating "logins" as an alias of a single table, it becomes the name of a derived table, and I create that derived table by unioning the current asfact_logins with the appropriate number of archived asfact_logins_YYYY_MM tables:

select distinct
u.name user_name, count (u.name) total_logins,
min(occurred) earliest_login_of_period, max(occurred) last_login
from asdim_users u left join (
select occurred,userid from asfact_logins
union
select occurred, userid from asfact_logins_2014_05
union
select occurred, userid from asfact_logins_2014_04
) logins on u.userid = logins.userid
where occurred > GETDATE()-999
group by u.name
order by last_login desc


I don't know whether you're excited about that, but from my perspective:

[fireworks]

Monitor your network reliability

My network has been unreliable. Many weeks of inconvenience didn't push me to solve it, but you can bet I did once my wife told me The Colbert Report wasn't streaming well. I turned to a script to help me. You may find it helpful too.

If you launch this script on a machine, it will ping in batches of 200 and write the results to a log file. You can then scan the results to see how your network performs over time.

@echo off

@REM =============== CONFIG BEGIN ================

@REM == set the size of the ping batches
set loopsize=200
@REM == set the destination for ping
set pingdest=www.google.com

@REM =============== CONFIG END ================

@REM == set up the log file directory
set logdir=%~dp0\ping.logs
if not exist %logdir% mkdir %logdir%
@REM == set the current log file
for /f "tokens=1-9 delims=/:. " %%d in ("%date% %time%") do set stamp=%%g-%%e-%%f_%%h-%%i-%%j
set log=%logdir%\pinglog_%stamp%.log


:loop
echo === starting %loopsize% pings on %date% at %time%
echo === starting %loopsize% pings on %date% at %time% >> %log%
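@REM == the next two lines record (on screen and in the log) which wifi radio (SSID) is connected
@REM == they rely on grep, so unix-style utilities must be on your PATH (see the note at the end of this post)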
netsh wlan show interfaces | grep SSID | grep -v BSSID
netsh wlan show interfaces | grep SSID | grep -v BSSID >> %log%
ping -n %loopsize% %pingdest%  >> %log%
goto loop
@echo on

In my case, I noticed that video streaming was glitchy, and VOIP telephone calls and video conferences would sometimes lose data. The script reported that batches of pings almost always showed around 2% loss, and frequently as much as 5%. My network setup looked like this:

internet provider - modem - netgear wifi router - computers

Was the problem my inexpensive Internet package? My used modem from eBay? My old router? I changed my setup to this:

internet provider - modem - dlink wifi router - netgear wifi router - computers

Then I connected one machine to the dlink radio and another to the netgear radio. I watched network performance on each. The dlink radio performed great, but the netgear radio still had high ping loss. I had identified that the netgear wifi router was the problem.

So I reverted a firmware upgrade that I had applied a couple of months ago, connected again to the netgear radio, and now its ping losses are far less than one in a thousand. This script helped me diagnose and solve the problem -- so I don't have to buy a new router and reconfigure the house.

By the way, I use this ugly command to extract just the lines I need to see: the name of the radio I'm connected to and the results per ping batch. This works because I've downloaded egrep and sed (among other unix-style utilities) and added them to my path:

egrep "SSID|loss" C:\ping.logs\pinglog_2014-01-24_9-46-32.log | sed s/".*("//g | sed s/").*"//g | sed s/".*\:"//g

The result looks like this:

0% loss
 netgear radio
0% loss
 netgear radio
0% loss
 netgear radio



Configure Eclipse TFS Plugin for SSL Repos

Java and Microsoft integrations? Edge cases aren't always fully documented.

Today I tried setting up my Eclipse IDE with a Microsoft Team Foundation Server (TFS). The TFS plugin is here. My organization's TFS is on a host with SSL, and we use our own certificate authority for the certificate. When I tried to connect to TFS, I got this error:

sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

With SSL, the client (in this case Eclipse) needs to know whether to trust the certificate on the server. Usually, servers use certificates from Verisign, Thawte, and the like, and Java ships with a file describing those commercial certificate authorities that can be trusted. But if you're using a certificate or a CA that isn't recognized, then Java -- and by extension Eclipse -- won't trust the certificate. You can, however, tell Java to trust the certificates or CAs you need. There are at least three ways to do this.

First, I found an IBM webpage that described how to start Eclipse with an extra trust store. This worked, but it seems like unnecessary overhead to create a new trust store and to launch the application with extra arguments every time. The page is here. This creates a custom keystore:

C:\>%JAVA_HOME%\bin\keytool.exe  -import -alias tfsrepo.mydomain.com -file c:\temp\tfsrepo.mydomain.der -keystore mycustom.keystore -storepass password

And this starts Eclipse using that keystore:

C:\eclipse\eclipse.exe -vmargs -Djavax.net.ssl.trustStore="%JAVA_HOME%\bin\mycustom.keystore" -Djavax.net.ssl.trustStorePassword=password

Second, I realized that it would be simpler to have the Java install behind Eclipse include my TFS server's certificate in its default trust store. So I downloaded the SSL certificate for my TFS server and added it to my cacerts file:

C:\>%JAVA_HOME%\bin\keytool.exe  -import -alias tfsrepo.mydomain.com -file c:\temp\tfsrepo.mydomain.der -keystore %JAVA_HOME%\lib\security\cacerts -storepass changeit

Third, I realized that I ought to just import the certificate for my organization's top-level CA instead of using the TFS cert. The top-level CA's certificate lasts many years longer than the one on the TFS server, and if I trust the top-level CA then my Eclipse install will trust all other systems with certificates from our CA. So I first deleted the TFS certificate:

%JAVA_HOME%\bin\keytool.exe  -delete -alias tfsrepo.mydomain.com  -keystore %JAVA_HOME%\lib\security\cacerts -storepass changeit

Then I imported the top-level CA's certificate:

C:\>%JAVA_HOME%\bin\keytool.exe  -import -alias topCA.mydomain.com -file c:\temp\topCA.mydomain.der -keystore %JAVA_HOME%\lib\security\cacerts -storepass changeit


And now I can launch Eclipse (c:\eclipse\eclipse.exe) and connect to my TFS system (File->Import->Team->Projects for Team Foundation Server->Servers->Add->tfsrepo.mydomain.com).

To download a certificate, use your web browser to visit the URL of the system. Use the browser's feature to look at the certificate. In Firefox, I click the lock icon in front of the URL, then click "More Information...," then click "View Certificate," then "Details." At this point, I have an Export button to save this certificate to a file, or I can use the certificate hierarchy to select the top-level certificate from the CA and save it.
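
If you'd rather not click through a browser, and you have openssl available (it isn't part of Windows or the JDK, so treat this as an optional alternative run from a unix-style shell), something like this grabs the server's certificate in DER form, matching the tfsrepo.mydomain.der file used in the keytool commands above:

openssl s_client -connect tfsrepo.mydomain.com:443 -showcerts < /dev/null | openssl x509 -outform der -out tfsrepo.mydomain.der

Note this saves the server's own certificate; for the top-level CA certificate, the browser's certificate-hierarchy view is still the simpler route.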


And if you want to see which certificates your Java system trusts, you can list what's in the cacerts file with this:

%JAVA_HOME%\bin\keytool.exe  -list -keystore %JAVA_HOME%\lib\security\cacerts -storepass changeit


WCI Recurring Jobs Don't Party in 2013

Happy New Year!

Now that the holiday parties are over, we get to deal with the mess that comes so often in technology when calendars turn over. The mess I found myself facing this morning is due to a "feature" of WCI, so you may have it too.

Recurring jobs are set to run on an interval that has an end date, and the portal UI defaults that end date to Jan 1, 2013. Any pre-existing job that was set to run periodically with the default end date is no longer scheduled to run. This includes many crawlers, syncs, maintenance jobs, and so forth. Any new job set to run on a recurring basis also defaults to Jan 1, 2013, which, since it's now in the past, causes the job to run once but never again.

You can query the portal database to get a list of jobs that [1] ran as recently as December and [2] aren't scheduled to run again. This is the list of likely candidates that would need to be rescheduled. Also, the query gives URL suffixes to let you easily create links to open the job editors. In your case, you may want to put the full URL to your admin portal in there. In my case, I used this query for many systems with different prefixes, so I kept it generic. Here's the query I used:

SELECT j.[OBJECTID],j.[NAME]
      ,u.NAME 'owner name'
      ,j.[CREATED],j.[LASTMODIFIED],j.[LASTMODIFIEDBY],[LASTRUNTIME],[NEXTRUNTIME]
      ,'server.pt?open=256&objID=' + cast(j.objectid as varchar(9)) 'admin editor page suffix'
  FROM [PTJOBS] j inner join PTUSERS u on j.OWNERID=u.OBJECTID
  where NEXTRUNTIME is null and LASTRUNTIME > '12/1/2012'
  order by [owner name]

Enjoy!




Update: This is now BUG:15947880 and KB Article:1516806.1.


Marketers go where the eyeballs are. We look at "social" media, so how can we be surprised by the invasion? But I'm still disappointed when I encounter blatant gaming of the channels that are supposedly trusted sources of information from "people" like us.

Let me introduce you to Anne Waterhouse. She lives in New York, and judging by her photo, she's a lovely mix of saucy and innocent (who isn't drawn by that?), and she is of modest means (just like the rest of us!). She tweets as @annewaterhouse about things that interest her nearly 2000 followers. It's good content. Funny posts from Failblog, smart content from Alltop, tech goodness from Mashable, and helpful tips from Lifehacker.

The problem is she isn't real.

I met Anne because yesterday she tweeted about a short video that my cycling buddy made. She picked it up from thought-leader Guy Kawasaki's blog. "How to make better presentations in 2:53," she said.

Cool! Kawasaki likes Marc's video, and someone shared it on Twitter. When I looked at her Twitter page, I was surprised by the pace at which she posted. How could she consume so much web content? Who was she? Her profile revealed little. I scanned the timestamps, and I realized she had posted in each of the preceding 24 hours. Ah, she's superhuman and doesn't need sleep?

The constantly changing Twitter API now allows you to access the latest 200 tweets from a person using URLs like this: https://api.twitter.com/1/statuses/user_timeline.rss?screen_name=annewaterhouse&count=200. I grabbed her tweets and extracted the timestamps using this:

grep -i pubdate tweets.xml | sed s/".*<pubDate>"//g | sed s/"<\/pubDate>"//g > timestamps.xml

I then brought those into Excel and made histograms by day, then combined them to show all three days overlapping. This amazing woman tweeted 200 times in the past 37 hours. Check it out (or download spreadsheet):

[chart: her tweets per hour, with all three days overlaid]
Okay, fine, she has some automated tool that retweets the RSS feeds from her favorite sites. Some of her tweets, like this one, are generated by TwitterFeed.com. That doesn't mean she's not real. Is a real person behind these? Maybe she talks with her friends too? Well, no. In these 200 tweets, I used grep -v to filter out the link-carrying messages that would come from the RSS feeds, and there was nothing left. I filtered for "@" mentions of other users and found none.
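
For the curious, the filters I mean look something like this -- just a sketch against the tweets.xml saved above, with grep patterns that are my guess at how links and mentions appear in the RSS titles:

grep -i "<title>" tweets.xml | grep -vi "http"
grep -i "<title>" tweets.xml | grep "@"

The first line keeps only the tweets with no link in them; the second looks for tweets that mention another user. Both came up empty.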

Who set this up? Is this the creation of one of the websites she links back to? Are they trying to drive their own traffic? The idea is clearly a good one, based on the thousands of people following these garbage tweets. Is there a marketer/exploiter out there who discreetly sells this to websites? "Give me $500 for your own @annewaterhouse. I guarantee she'll share interesting content and garner a following, and this will drive traffic to your site." Does that exploiter use the metrics from the URL-shortener services that generate the links and then bill its customers per click? "Give me two cents per click into your site." Do they go to the trouble of using so many URL-shortener services to make it look less automated?

And to think my friend and I were pleased that she shared a link to his video. Well, I guarantee Guy Kawasaki is real, and he's the one whose opinion matters. Now let's stop thinking about social media and learn how to give better presentations when we're dealing with real people:


How To: Bulk download from Sharepoint

This post goes in the "Why wasn't I able to Google that?" category. Remember this old comic?

It turns out things that seem like they should be easily Googleable aren't. Maybe this post will be helpful to someone else out there.

I've been helping a small business migrate off Sharepoint and onto a local NAS device (Dlink DNS-323). They have about 4000 documents in 300 folders on Sharepoint hosted by Microsoft Online. How to do a bulk download? The Sharepoint UI (that they hated so much that they asked me if I could migrate them off it) gives no clues. I did searches for these without turning up anything good:

sharepoint "bulk download" "microsoft online"
bulk download sharepoint.microsoftonline.com
sharepoint server 2007 bulk download

Many people offer bulk upload tools, but what about bulk download? Certainly people want to change technologies now and again. I saw one discussion thread that vaguely mentioned WebDAV, but I found nothing about it in the online help, and I found very little for this Google search:

 "microsoft online services" webdav sharepoint

Finally I just gave it my best shot. If it does support WebDAV, which the web hasn't confirmed for me, then how would I go about it?

My laptop runs XP (still my favorite environment), so I used these steps:

  1. My Network Places
  2. Add Network Place
  3. Next
  4. Choose another network location
  5. Enter URL to Sharepoint site: https://smallcomicrosoftonlinecom-2.sharepoint.microsoftonline.com/Shared%20Documents/
  6. Give same credentials used to log into Sharepoint

The laptop of the person for whom I was doing this migration runs Vista, so the process is a little different there. I connected them using notes from a forum:

  1. Hit start menu and go to "Network"
  2. Hit Alt-button to get the tools-menu.
  3. Go to Tools -> Map Network Drive
  4. Click on the link on the bottom that says "Connect to a website that you can use to store your documents and pictures"
  5. Hit Next
  6. Choose "Choose a custom network location" and hit Next.
  7. Enter your url location...

Well, the instructions from here weren't a perfect fit, but on Vista, basically, enter the URL to Sharepoint, then the credentials, then maybe select the drive this will map to.

After connecting by WebDAV, I was able to access the entire Sharepoint site as a folder in Windows Explorer, and I could then open that folder and copy its entire contents to a local disk drive. I brought it to the laptop first, then I copied it onto the NAS drive.

At this point, the folks I'm helping out were able to disable Sharepoint logins for everyone but the administrator. They'll let their Sharepoint subscription lapse at the end of the month, which they're very happy about.

By the way, the DNS-323 that I wrote about and to which I gave a glowing review? It has extraordinarily frustrating user and group management through the web interface. Users can only belong to a single group. So if you want to have a group for managers who would access the /managers share, then a group for accounting who would access the /accounting share? You can't do it with the standard UI. As soon as you add the manager Adam to the accounting group, he is taken out of the manager group. I wound up having to telnet into the server to edit the undocumented, non-standard config file (/mnt/HD_a4/.systemfile/.smb.ses). I don't recommend this NAS for use outside the home.

Do you work with people who need to analyze PTspy logs on their desktop but who don't have the Spy reader available to get those logs into an easy-to-read format?

Back in the day, BEA put out an installer called LoggingUtilities_PTspy with the executable file ALILoggingUtilities_v1-1_MP1.exe. If you can still find that installer, you can use it to install the Spy reader. The format of .spy logs hasn't changed, so that old reader works for the latest and greatest (or worst) logs.

But that installer was only for 32-bit machines. If you're working with Windows 7, then you need another approach. My recommendation is that you use the regular (and unfortunately huge) component installer, install something that includes the Spy reader, then delete the components you didn't want. The steps I used to do so follow.

Run the WebCenterInteraction_10.3.3.0.0.exe installer. At the prompts, enter the following:

--

Installation folder: (your choice; I'm choosing c:\apps\plumtree).

Choose components: Check ONLY Automation Service.

If you get a Dependency Warning about Microsoft Visual C++, then "Yes, launch the installer."

Configuration Manager - Port and Password: Accept the default port of 12345 and leave the password blank.

Password inconsistency: Click "Continue" to keep the blank password.

Pre-Installation Summary: Click install.

Launch Configuration Manager: Just click next.

Application Settings Confirmation: Select "No, configure later," then click next.

Install Complete: Select "No, I will restart my system myself," then click done.

--

PTSpy is now available on your machine. You don't need to reboot.

However, your computer also has three services installed that you probably don't want. To remove them, you need to run commands in a command prompt with elevated administrator privileges. To get that command prompt, click the start button and type "cmd" into the search box. You'll see cmd.exe among the search results. Right-click it, then select "Run as administrator."

Now in that prompt, paste in the following commands (to paste, right-click the title bar, click Edit, then click Paste). You can paste these all in at the same time:

@rem -- make sure all services are stopped
sc stop "oracle wci logger"
sc stop ConfigurationManager12345
sc stop ptautomationserver

@rem -- now delete them
sc delete "oracle wci logger"
sc delete ConfigurationManager12345
sc delete ptautomationserver


That should do it. You should see output like this:

C:\Windows\system32>sc delete "oracle wci logger"
[SC] DeleteService SUCCESS

C:\Windows\system32>sc delete ConfigurationManager12345
[SC] DeleteService SUCCESS

C:\Windows\system32>sc delete ptautomationserver
[SC] DeleteService SUCCESS

The install put just over 800mb of files on your machine, but most of those are not related to PTSpy. You can reclaim about 600mb by deleting the unnecessary folders listed below (a command sketch for the deletion follows the two lists).

Open the folder C:\apps\plumtree\common and delete these:

container
icu
inxight
outsidein
pthreads
wrapper

Then open the folder C:\apps\plumtree and delete these:

configmgr
descriptors
jre142
jre160
ptportal
uninstall
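
If the unix-style utilities on your PATH (I mentioned grabbing grep and sed in my network-monitoring post) also include rm, the cleanup can be done from that same elevated prompt. This is only a sketch, assuming the c:\apps\plumtree install folder chosen above; otherwise just delete the folders in Windows Explorer:

cd C:\apps\plumtree\common
rm -rf container icu inxight outsidein pthreads wrapper
cd C:\apps\plumtree
rm -rf configmgr descriptors jre142 jre160 ptportal uninstall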

Now on to analyzing spy files!

What Oracle engineering should do though is put an option in the WCI installer for just the Spy logging toolkit (it won't be in WCI 10.3.3). Maybe some day...

Enjoy!


Universe: I am resigning from Oracle.

I know the universe of interested parties shrinks every year as the sales of the WCI portal (née Plumtree) decline, Oracle promotes a different product, and old customers move on to new platforms. But! Some of you are still out there reading, and so thanks!

Fortunately for you all, I'm not going far. I'll continue working with the WCI portal for a long-time customer, Boeing, for whom I've consulted off and on, but mostly on, since 2004. So the blog entries will continue to sporadically pop into your RSS feeds.

I have three company laptops that I need to return. The newest one Oracle issued to me several months ago, and I'm sure it will be redeployed to another employee. The older ones, however, will likely be "decommissioned." Occasionally I read stories about crooks who buy old hard drives to recover the previous owners' data and then engage in all sorts of nefarious crimes. I don't want my data open to that risk. Since I don't know exactly what Oracle's decommissioning process is, and since any company's processes may not be perfectly followed, I decided to take extra care to destroy the personal, customer, and corporate data that had been on the hard drives.

So here's what I'm doing tonight, and you probably should do something similar when you let go of an old laptop, whether you're disposing of a personal machine or resigning from a job that has run its course:

  1. Copy any needed data off the old laptop (e.g., the photos from when the kiddo was a newborn)
  2. Create a "live cd" or a bootable disk with a *nix operating system on it. I used Ubuntu (get it).
  3. Boot your old laptop from the CD. On my Dell laptop, I used F12 to get a one-time boot menu to select that I wanted to boot from CD rather than from the hard drive.
  4. Identify the partition name for your disk. I did this by going to System -> Administration -> GParted Partition Editor. (A command-line alternative is sketched just after these steps.)
  5. Open a console.
  6. Type a command like this one at the prompt, where /dev/sda2 is my laptop partition to wipe:

    sudo shred -vfz -n 1 /dev/sda2

  7. Wait while the machine overwrites your entire disk first with random data, then with zeros.
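
If you prefer the console to GParted for step 4, you can also list the disks and partitions from that same live-CD terminal and pick out the one to wipe. Just a quick sketch; double-check the sizes and types against what GParted shows before pointing shred at anything:

    sudo fdisk -l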

That's it. There's not much left to find on the drive. This is a much better approach than just reformatting the drive, because reformatting merely clears the disk's address tables but leaves the data intact and retrievable by the Dr. Evil who makes it his business to do such things. Of course, you could be more fastidious than I was. Another blog gives a more detailed review of the technical issue and even more thorough ways to knock it out.

After erasing the data, I went the extra mile and installed Ubuntu. This way anyone who turns on the computer will be able to log in and see that nothing is readily available, and they'll also find it to be a generally useful machine.

Enjoy.

PS: Yes, I'm extraordinarily happy to move on from Oracle!
