Once again, I decided to use the ‘Upgrade Automatically’ option, and once again it failed me. This time, it was the upgrade to WordPress 3.0, followed by a theme update afterwards.

It looks like everything succeeded (as far as I can tell) except for the last step. I upgraded to 3.0 without incident, but then a goofy theme that I should probably just remove bit me in the backside. It turns out that WordPress now has an automatic ‘maintenance mode’ which is enabled by dropping a file called .maintenance into the WordPress root. While this file exists, no one can do anything on the site (including getting into the admin site).

Thankfully, Google saved the day, but, man . . . what a nuisance. I had to log into the website’s file management system and delete that .maintenance file. Not a big deal, I know, but it shouldn’t work like this. I should be able to do it from the WordPress Dashboard. And, yes, I should have read the release notes and such. But who does that? Seriously? I do it for a living, and the last thing I want to do is read release notes for my personal software.
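For anyone hitting the same lockout, here is a minimal sketch of the fix (the root path below is a made-up stand-in; yours is wherever WordPress actually lives):

```shell
# WordPress drops a .maintenance file into its root during an automatic upgrade.
# If the upgrade dies part-way, the file is left behind and locks everyone out.
WP_ROOT=/tmp/wp-root                                  # assumption: stand-in for your real WordPress root
mkdir -p "$WP_ROOT" && touch "$WP_ROOT/.maintenance"  # simulate the stuck upgrade

# Deleting the file lifts maintenance mode immediately, no Dashboard needed
rm -f "$WP_ROOT/.maintenance"
```

Any file-level access works for this: SSH, FTP, or the hosting panel's file manager.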

I really wish more people read About Face by Alan Cooper.

{ Comments on this entry are closed }

I was working with a client today who wants to rework what their URLs look like, actually putting session data into them.  The reasons for this vary, and are irrelevant, but suffice it to say this was a critical piece of functionality for their site.  Initially, I said “absolutely – Webthority can do this!” knowing that I’ve used Webthority’s URL rewriting capabilities in the past.

However, as we put everything into place, nothing happened.  It turned out that the URL rewriting they wanted (inserting the session id into the URL) wasn’t available until the user authenticated.  That sort of ‘URL mangling’ only happens when an authentication agent is used!  I’ve never used Webthority without one, which makes sense since it is often used for Web SSO, and you always want to authenticate, right?

In any case, after a lot of stumbling and bumbling around, Paul H clued me into how the Custom Authentication Agent was used.  The documentation is pretty scant on it, so I created this 2.5 minute video outlining the changes I had to make in order to get the user to automatically authenticate and establish a session.

This is where I came up with the term ‘promiscuous authenticator.’  In a perfect world, this would be another option, just like LDAP or Database.  But for the time being, this will work.


Hopefully, this will help others that are looking to configure Webthority’s Custom Authentication Agent.

{ Comments on this entry are closed }

I’ve now had a beta build of VAS 4.0 for a bit, and have finally gotten around to recording some videos featuring some of the new additions.  For core VAS functionality, this blog post still has a lot of relevant videos.  None of that functionality is going away.  However, there are a lot of new things in 4.0, so here are some starter videos.  I’ll try to post more, time permitting, but I’ve given up on Camtasia for Mac, so it may take a while.

As with the VAS 3.5 videos, there’s no audio, so you have to use your imagination.  The second video is quite lengthy, even with some heavy editing to speed things up; this is simply because copying the VAS binaries (all of them) to the server takes a bit of time.  Other than that second video, all the others are under 3.5 minutes.  Enjoy.


{ Comments on this entry are closed }

I posted the following in an entry quite some time ago, but thought it made sense to break out just the VAS ones into a separate post for easier searching.  And so I can reference it in the VAS 4.0 blog post I’m about to put up after this one.

All of the following videos are 1-3 minutes in length, with no audio.  They show some of the core VAS functionality which is found across the board on all operating systems supported by VAS:

If you happen to have NIS running in your environment, you’ll want to have a look at the next set of videos that target NIS maps, and how VAS brings them directly out of AD and onto your *nix hosts:


For a nice, complete 18-minute NIS migration video (with audio!!!!), here is one that I recorded for a particular customer:

Here are some additional random VAS videos that I’ve recorded that are good to keep together.  People often have questions about what the VAS install looks like on the Mac – here are two videos of that:

Lastly, here is VAS’ self-enrollment feature on Solaris 10:

{ Comments on this entry are closed }

I just finished off a presentation at the Microsoft office in Atlanta, and told people that I would post up the slides I used during the presentation. You can find those here:

After me, Aaron Nelson presented, focusing on Server Management using PowerShell.  You can find his blog here:

And, finally, Berry Gerdsen is currently presenting the AD cmdlets. Once he’s done, his slides will be found here:

{ Comments on this entry are closed }

In ARS 6.5, we really beefed up the end-user self-service functionality. It is really, really slick, and people who see it really like what it offers their end users. I’ve made numerous recordings of it, but none really came out to my satisfaction. And it has nothing to do with our product.

This is because I’ve been struggling with converting from Camtasia for Windows to Camtasia for Mac. The two products are miles apart, and it’s the last bit of my ‘toolkit’ that I need to re-learn on the Mac. And, of course, the two products do not even produce videos in the same format, so I’ve had to re-record everything and learn all the new controls.

With all of that, if the video is not to your liking, there’s not much I can do right now. It’s late, and I have a deadline, so here is a 6+ minute video of the Self Service Manager functionality (note that there is no audio, but I’ve added some text to highlight what is happening whilst I re-learn Camtasia):


As always, comments and feedback are appreciated.

{ Comments on this entry are closed }

I’m at a Residence Inn in Boston today, and just perused someone’s voicemail messages. I don’t need to say whose messages they were, or their content, but it was rather interesting. How did I do this? Well, nothing too hackerish or illegal. I actually listened to messages that someone decided to share out there.

This poor fool installed iTunes and let it be the default mp3 player on his machine.  He also didn’t change his preferences to require a password, or to stop sharing his music.  That means anyone on the same network as him, with iTunes running, can browse and listen to his whole library.

Where does the voicemail come in?  It looks like he’s using one of those fancy services that sends you your voicemail message as an mp3 file.  Or perhaps his corporate PBX does it.  In any case, he downloads his voicemails on his computer, listens to them, and then they remain in his iTunes library.

And when he plugs into the network at the hotel, everyone with iTunes can see anything he’s got in his iTunes library.  There was an interesting message from a company that wants to partner with his company; it was 53 seconds long and titled ‘20101251614511956509.’  Given that it’s now May, I wonder if that partnership took place.

I should look it up . . . in the meantime, make sure your machine isn’t sharing anything you don’t know about.  I love Apple, but they’re a little too ‘consumer friendly’ and definitely give IT departments a huge headache with things like this.

BTW – have you considered using Quest’s VAS to deploy group policies to your employees’ Macs to stop them from using iTunes?  You should!

{ Comments on this entry are closed }

Some time in Q3 of last year, Stu Harrison (the PM for Defender) got me a beta copy of the latest version of Defender, which was due to have GridSure in it.  Of course, I took the time to record a quick demo of it, but then Stu asked me to delay releasing it.  One thing led to another, and I never got around to posting the demo up here.  Today, however, while going through Defender with another architect, I remembered that I had this recording.

Before I go any further, you may be asking, “what is GridSure?”  It is another type of token that is available with Defender, and you can see a 3 minute marketing demo of it at the URL below.  This recording does a good job of explaining how it is used by the end user:


Whilst the demo above gives you a good idea of what the end user will see, I recorded a demo showing how to configure the token and policy, and what the user does to register.  In addition, I show the standard Defender desktop token being used with the ISAPI filter at the very beginning of the video.  I’ll apologize now for the microphone settings, and without further ado, here’s the 3 minute, 20 second video:



{ Comments on this entry are closed }

I got a call yesterday from a colleague looking for help in importing a text file. For those who don’t know, I do quite a bit of work with a Quest product called ActiveRoles Server, and an add-on called Quick Connect. Quick Connect (QC from now on) is pretty slick, and has come a long way in the last two years. However, it can’t do everything quite yet.

The file to be imported was a fixed-width file, meaning every line was the same width, as was every field, which meant fields were padded with spaces. Unfortunately, QC cannot import those files out of the box today.

The initial thought was to write a pre-sync script that would alter the file and put it into a usable format. However, that’s a lot of work, and a lot of scripting. And with someone on-site, it could take a while given the added pressure.

So I made the following suggestion (note: SSIS is SQL Server Integration Services – the successor to SQL Server Data Transformation Services):
SSIS lets you take delimited or fixed-width files and do whatever you want with them. I suggest you watch this:

Then create a package to do what you want (all wizard and gui driven – very easy). Once the package is created, and executing properly, take a look at the following command line command:

This will let you execute the package, which you can use in a pre-sync step. Also, SSIS can do any format to any format – you don’t have to have SQL Server (or any DB) as either end point. So you could actually use this to convert a fixed width (or ragged right) file to a CSV file that QC can consume.
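To make the ‘fixed width to CSV’ idea concrete outside of SSIS, here’s a rough sketch of the transformation with awk; the column layout (name in columns 1-10, id in 11-15, dept in 16-25) and the sample data are invented for the example:

```shell
# Sample fixed-width input: name (cols 1-10), id (cols 11-15), dept (cols 16-25)
cat > /tmp/users.fw <<'EOF'
jsmith    00042support
mdoe      00117sales
EOF

# Slice each line by position, trim the space padding, and emit CSV
awk '{
  name = substr($0, 1, 10); id = substr($0, 11, 5); dept = substr($0, 16, 10)
  gsub(/ +$/, "", name); gsub(/ +$/, "", dept)
  print name "," id "," dept
}' /tmp/users.fw > /tmp/users.csv

cat /tmp/users.csv
```

The same slicing works for ‘ragged right’ files, since substr simply returns whatever is left on a short line.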

Why do I think this is a better solution than a pre-sync step? Well, there are several reasons, and some may disagree, but I’ll put them out there anyway:

  1. Maintaining scripts is a pain – that pre-sync step will need someone “script-capable” to alter it when the file format changes (and it will – it always does)
  2. GUIs are easier – the script is just that, while SSIS provides a nice GUI package editor – it’s simply a better tool and is less prone to allowing mistakes
  3. Debugging – QC doesn’t have any debugger in it – so you try the job, and it fails, and you re-code. With SSIS, you get debugging, and you can isolate the transformation so you don’t have to run the whole QC job just to see if the file is being built correctly
  4. SQL Server infrastructure – using a package around the transformation lets you add many, many other things to it – notifications, other data sources, and other options – you basically have all of SQL Server at your disposal. And the coolest feature is the command-line tool (DTSRun in DTS, now dtexec in SSIS) for running packages unattended. I’m all about the command line when possible.
  5. Finally – Speed – I’ve found that DTS (now SSIS) is super fast. Its sole job in life is moving data around. And while QC is quick, too, the SQL Server team has been working on SSIS for years, and is all about ‘speeds and feeds.’ In the past, I’ve had DTS packages that would process over 50 million rows of data – that is some serious data movement. And given that the file needs to be processed twice (once to transform and once to load through QC), you might as well cut the time down anywhere you can.

So there you have it. Will Quick Connect have this functionality in the future? I can’t say for sure, but we’ve got a bunch of smart people working on the product. Why do we not have it already? We could put every conceivable feature into the product, but then we’d never ship it. What do you need to do to get it? Well, you need to let someone know. This is the first time I’ve encountered a fixed-width file in two years. If it’s rare, then it will go to the bottom of the feature list. But if lots of people request it, it will rise to the top. And how do you let someone know? You contact someone at Quest (myself, the Product Manager, your sales rep, or even support) and ask for an enhancement request. It’s that simple – and it all gets back to our PM, who tracks this stuff.

Enjoy the video, and let me know if it helps.



{ Comments on this entry are closed }

Some folks have asked why there have been no posts for over a month, and it had everything to do with WordPress.

After my last post, I upgraded to WordPress 2.9 – and that’s when all the trouble started.  Basically, I couldn’t log in.  I could get to the database, and I could get to the file system, but actually logging in gave me a message that I didn’t have rights to log in.  So my password was good, but everything else was a no-go.  I had been using Fantastico, which is great until it isn’t.  Fantastico is basically a management tool my hosting provider offers that automates deploying web apps like Drupal and WordPress.  It makes new app deployments much easier than even the apps’ built-in installers.  For example, with WordPress, it will create the DB and set up your first user.

But I think it’s the fact that I deployed with Fantastico and then upgraded inside WordPress that caused these problems.  And after lots of googling, searching, and experimenting, I gave up.  I actually needed to move the site to the main address (www.idmwizard.com instead of blog.idmwizard.com) for a while anyway, and this gave me a swift kick in the backside to do it.

For those of you stuck with an upgrade, here’s what I did.  First, I deployed a very, very bare-bones WordPress to the core site.  Which means I deleted everything in the root directory (except for .htaccess) and then put the default WordPress out there.  I then created the database and edited wp-config.php.  Once that was done, I went through the default install (found at /wp-admin/install.php) and specified my new admin user.  Once WordPress did its thing, I came back through and copied my original files (from the blog.idmwizard.com site) over the ones in the root for www.

Now, here come the tricky steps – I dropped all but two tables in the WordPress database.  The tables I left were wp_users and wp_usermeta.  This would let me log in, and then I could bring everything else over.  Next, I exported my old database and edited the SQL file.  The edits I made were:

  1. Swap the order of the inserts for wp_users and wp_usermeta.  These two tables are linked, and trying to insert into wp_usermeta before the users themselves were inserted would cause errors.
  2. Remove any entries for user #1 in both tables – this was the main user, and the one I could not log in with, so I wanted to make sure it didn’t get screwed up again.
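Step 2 can be scripted rather than done by hand.  Here’s a rough sketch using grep on a simplified stand-in for the dump (real mysqldump output batches many rows per INSERT statement, so inspect your file before trusting a pattern like this):

```shell
# Simplified stand-in for the exported dump (real dumps look messier)
cat > /tmp/old_blog.sql <<'EOF'
INSERT INTO wp_users VALUES (1,'admin','...');
INSERT INTO wp_users VALUES (2,'dave','...');
INSERT INTO wp_usermeta VALUES (10,1,'nickname','admin');
INSERT INTO wp_usermeta VALUES (11,2,'nickname','dave');
EOF

# Drop the rows tied to user #1 (wp_users.ID = 1, wp_usermeta.user_id = 1)
grep -v -e "wp_users VALUES (1," -e "wp_usermeta VALUES ([0-9]*,1," \
  /tmp/old_blog.sql > /tmp/old_blog.trimmed.sql
```

With the trimmed file in hand, the import back into the new database is the usual mysql-client step.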

After that, I imported the database into its new place, and that was it.  It looks like we’re back to where we started (almost).  There are still things that are screwed up – I see my tags are missing from all my posts, as well as the categories.  I’ll need to sort that out later.  But the core site works, and I should probably re-tag everything anyway.

That is all . . . glad I finally got this done, and I’m going to bed . . .

{ Comments on this entry are closed }