Right on the heels of my last post (here, on a sister site) about the various smart cards, I get an email today that includes the following:

“CAC and Defender are both two factor authentication methods. They can be combined to give you three factor but I haven’t seen anyone do that. CAC uses the DoD PKI structure and Defender uses RADIUS to AD”

So I had to reply with the following:

QAS supports smartcards (and has for years now, including CAC) but doesn’t use Defender for this. Let’s back up and answer exactly what QAS and Defender do.

QAS provides AD integration to Unix/Linux/Mac systems. Defender provides RADIUS authentication using AD as its directory. Smartcards (like CAC and PIV) use PKI, not RADIUS, to authenticate the user.

The only time Defender gets involved with smartcards is if the card has a token (not a certificate) on it, in which case it provides authentication using that token. There are cards out there that are hybrids, which allow for both tokens and certificates. In that case, Defender only uses the token portion and ignores the certificates.

Now, if you want CAC support for QAS, you need to look for the QAS smartcard module and the related license. To install it, the QAS ISO has a smart card install & admin guide, and you would look for the vassc package to deploy to your particular system. We currently support Red Hat (Linux), Solaris and Mac with the smart card modules.

The other thing that needs to be noted is that nothing Quest provides can accommodate 3-factor authentication. At least, not on its own. As a quick review, the 3 factors to authenticate are:

  • Something you know (password, key phrase, hint, account number, username, etc)
  • Something you have (a key, a token, a certificate, etc)
  • Something you are (biometrics – fingerprint, retinal scan, voice print, etc)

Having multiple instances in the same category (a username, a password, and an account number, for example), does not constitute multiple factors. Now, QAS, Defender, ESSO and other Quest products can all co-exist with other authentication systems, but out of the box, you can get 2-factor authentication from us in a variety of ways, not three.

(edited 2011-11-09 to include link to federalcto.com post referenced in the first sentence)


It’s evident throughout history: inside jobs. Aside from nuclear war and weapons of mass destruction, cyber attacks pose the single greatest threat to US security, and they are growing more and more difficult to prevent. One clear indicator of the threat is the sheer volume of breaches. Cyber attacks on federal computer systems have increased more than 250% over the last two years, according to the Homeland Security Department. Federal computing resources are under constant threat, not only from the outside but also from trusted partners and internal users. Cyber attacks are a clear and present danger, and the potential for both accidental and deliberate breaches of sensitive information is a growing concern. Innocent but careless employee actions can set the table for attacks by more malicious parties. In many cases, the threats are inadvertent, with users unwittingly introducing harmful viruses to your agency or allowing sensitive data to be leaked. But whether or not there’s malice, the damage from breaches can be great.

Join me for a discussion on Monday, March 29 @ 1:30 pm ET on ways to protect your environment from the inside threat.  We’ll talk about how you can not only improve your security posture, but also meet regulatory and statutory guidelines during audits and reviews.  Plus, you’ll also learn about forensics and tools you’ll need when a breach does occur to minimize the losses and downtime.

You can register here. I’m looking forward to a hearty discussion.


I recently got asked to show how someone could use Quest’s ActiveRoles Server to temporarily grant access to a CD drive or USB storage device to a select set of users.  I knew it could be done, and didn’t think it would take too long to demonstrate.  However, I’m now on my 3rd day of devoting some time to this, and it’s turning out to be a tad more difficult than I thought.  The problems are mostly with logistics and configuration, as you’ll see if you continue reading.

The first problem was that I was using VMs (virtual machines), where the USB and CD-ROM drives are virtualized.  That made me nervous about whether it would actually work ‘as advertised.’  So I went and got a Windows 7 laptop, joined to my lab domain, to convince myself that what I was doing would work in the ‘real world,’ since we’re talking about desktops here.  The short version – it does, indeed, work in both cases.

After that, I had to find the specific setting.  It turns out there is a lot of information out there, including a few KBs from Microsoft themselves, but nothing really summarizing all the gotchas.  So here is my list, assuming this is all done with native tools and without a COTS (Commercial Off-The-Shelf) product:

  1. The only reliable way to block the CD-ROM or USB drive on a large number of machines is through an ADM template that disables access by the system itself to a critical driver.
  2. That access will be blocked for all users on the machine; there is no fine-grained way to select which users can use which devices on a given machine.  The GPO is applied to the computer object, not the user object.
  3. The ADM template uses double-negatives.  You ‘Enable’ the ability to set the setting and then set it to ‘Disabled’ to turn off the specific drive.  I’ll explain with a short video below.
  4. The ADM template will ‘tattoo’ the machines it is applied to.  Tattoos are permanent, and so is this setting: it will persist on the machine even if the GPO is removed/deleted.  It also means that if you apply a setting, you will have to apply another GPO to explicitly reverse it.  You’ll see this mentioned by Microsoft as a ‘preference’ rather than a policy in their link below.
  5. For those of you that do not know, GPOs are not instantaneous.  You do not edit a setting, run to a machine, and see the results right away.  Machines actually PULL settings down, and Active Directory DOES NOT push them by default.  This can be overcome, of course, but the default behaviour is the pull.
  6. Because of the pull, and several other factors, it can take minutes and possibly hours to get a setting to a machine.  In the case of hours, it may be that you have to wait for replication to occur from the server where the GPO was edited to the server (domain controller) that your computer is working with.
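For anyone who wants to see the effect on a single machine before pushing anything via GPO, this style of ADM template boils down to setting the Start value of the relevant driver services to 4 (disabled). The following is a sketch of the local equivalent only; the service names are the standard Windows ones for USB storage and the CD-ROM driver, and reverting means putting the original Start values back yourself:

```shell
:: Run from an elevated command prompt on one Windows test machine.
:: Start=4 disables the driver service. USBSTOR normally defaults to 3
:: (load on demand) and cdrom to 1, so note the originals first.
reg add "HKLM\SYSTEM\CurrentControlSet\Services\USBSTOR" /v Start /t REG_DWORD /d 4 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\cdrom" /v Start /t REG_DWORD /d 4 /f

:: When testing the GPO-delivered version instead, force the client to
:: pull policy now rather than waiting for the background refresh:
gpupdate /force
```

This also makes the ‘tattooing’ in point 4 concrete: deleting the GPO later leaves these Start values exactly as the template set them.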

With all those constraints, I set out to put together the recordings below showing how it can be done.  So what I ultimately have is a group where a machine is added and removed as needed to have these settings applied.  Again, the settings, once applied, cannot be removed, but can be toggled from ‘enabled’ to ‘disabled’ and vice versa.

I got my ADM template together and went ahead and imported it.  The template I used can be downloaded here (http://www.idmwizard.com/quest/wb/block_drives.zip).  However, after I imported it, I found I couldn’t edit it in the GPO Editor.  Specifically, I couldn’t see the settings I needed to edit.  So with some more searching, I discovered that I had to disable some filters in the view.  Here is a video where I do all this, starting with the text of the ADM copied and pasted into WordPad:


Next, I actually looked at how computers could be added to groups in 2 different ways.  The easiest way is through regular group membership.  So in this video, I will simply show a computer getting added and removed from a group.  The difference from native tools is where ActiveRoles Server comes in.  You will see in the video that I can select a machine to be added temporarily.  I can set the addition, and the removal into the future, allowing me to only have the membership be active for a limited amount of time:


Another option, though, is through a dynamic group.  Dynamic groups are another ARS feature, which allows you to construct a query-based group.  The cool thing in this next video is that I also use a Virtual Attribute.  That is, I create a flag for the policy to be applied to the Computer object class, but there is no schema extension involved.  ARS keeps the attribute tied to the AD object internally, and allows you to work with it as if it were any other property of the particular class.  This is cool because you can have someone toggle this setting to put the machine in as needed:


Having shown all this, I still need to point out that a CD burner or a USB device is not the only way to get data out of a building.  Most desktops still have a floppy drive (which is also covered by the policy), a printer (local or networked) and some additional ports in the back.  That parallel port can still take some older devices, such as those Iomega Jaz and Zip drives I used back in the day to make backups.  And then you have all sorts of other devices, like smartphones, that may use different drivers, as well as have cameras built in to take ‘screen shots’ if push comes to shove.  If you know the driver to target, you can always disable it, but it feels like an arms race, to some degree.

After all of this, I’d probably suggest that you just look at something like ScriptLogic’s Desktop Authority for doing this (full disclosure – ScriptLogic is owned by Quest Software).  That tool may seem like overkill for this sort of task, but with all of the hoops one has to jump through to make it happen, it’s much simpler to use a COTS product and get onto other things.  It won’t cover the ‘someone taking a picture of the monitor’ scenario, but it holds up much better than my demonstration, which was quite cumbersome to work out and deploy.  Plus, it will let you roll things out closer to ‘real time’ rather than waiting for group policies to be replicated and applied.

As for a list of references, there are a number that I could list, but this page was the most useful, not just for the article but for the comments as well: http://oreilly.com/pub/a/windows/2005/11/15/disabling-usb-storage-with-group-policy.html

The MS KB article that everyone references can be found here: http://support.microsoft.com/kb/555324 and this is where I got my ADM template.


As a follow-up to the last post, here is the full text of the session for another imported fusion disk.  I actually imported the whole thing using the datastore browser within vSphere, and then got rid of the Applications and appCacheList folders that Fusion creates.  Once the import was done, I logged into the ESXi host using SSH, and here is what I did.

login as: root
root@twesx01's password:
You have activated Tech Support Mode.
The time and date of this activation have been sent to the system logs.
VMware offers supported, powerful system administration tools.  Please
see www.vmware.com/go/sysadmintools for details.
Tech Support Mode may be disabled by an administrative user.
Please consult the ESXi Configuration Guide for additional
important information.
~ # cd /vmfs/volumes/4d1b685e-858363b3-5fba-0026b9799581/twdc04
/vmfs/volumes/4d1b685e-858363b3-5fba-0026b9799581/twdc04 # ls -al
drwxr-xr-x    1 root     root               2800 Dec 30 14:18 .
drwxr-xr-t    1 root     root               1680 Dec 30 07:38 ..
-rw-------    1 root     root         1625423872 Dec 30 07:51 dc04-s001.vmdk
-rw-------    1 root     root          862584832 Dec 30 07:58 dc04-s002.vmdk
-rw-------    1 root     root             327680 Dec 30 07:58 dc04-s003.vmdk
-rw-------    1 root     root         2144010240 Dec 30 08:15 dc04-s004.vmdk
-rw-------    1 root     root         1950875648 Dec 30 08:30 dc04-s005.vmdk
-rw-------    1 root     root             327680 Dec 30 08:30 dc04-s006.vmdk
-rw-------    1 root     root            1048576 Dec 30 08:30 dc04-s007.vmdk
-rw-------    1 root     root             327680 Dec 30 08:30 dc04-s008.vmdk
-rw-------    1 root     root             327680 Dec 30 08:30 dc04-s009.vmdk
-rw-------    1 root     root             327680 Dec 30 08:30 dc04-s010.vmdk
-rw-------    1 root     root             327680 Dec 30 08:30 dc04-s011.vmdk
-rw-------    1 root     root             327680 Dec 30 08:30 dc04-s012.vmdk
-rw-------    1 root     root             131072 Dec 30 08:30 dc04-s013.vmdk
-rw-------    1 root     root               8684 Dec 30 08:30 dc04.nvram
-rw-------    1 root     root                956 Dec 30 08:30 dc04.vmdk
-rw-------    1 root     root                  0 Dec 30 08:30 dc04.vmsd
-rw-------    1 root     root               2585 Dec 30 08:30 dc04.vmx
-rw-------    1 root     root               1623 Dec 30 08:31 dc04.vmxf
/vmfs/volumes/4d1b685e-858363b3-5fba-0026b9799581/twdc04 # vmkfstools -i dc04.vmdk -d zeroedthick dc04-1.vmdk
Destination disk format: VMFS zeroedthick
Cloning disk 'dc04.vmdk'...
Clone: 100% done.
/vmfs/volumes/4d1b685e-858363b3-5fba-0026b9799581/twdc04 # rm dc04-s0*
/vmfs/volumes/4d1b685e-858363b3-5fba-0026b9799581/twdc04 # rm dc04.vmdk
/vmfs/volumes/4d1b685e-858363b3-5fba-0026b9799581/twdc04 # mv dc04-1-flat.vmdk dc04-flat.vmdk
/vmfs/volumes/4d1b685e-858363b3-5fba-0026b9799581/twdc04 # mv dc04-1.vmdk dc04.vmdk
/vmfs/volumes/4d1b685e-858363b3-5fba-0026b9799581/twdc04 # vi dc04.vmdk

During the vi session, I edited the one line that reads:

 RW 50331648 VMFS "dc04-1-flat.vmdk"

to read:

 RW 50331648 VMFS "dc04-flat.vmdk"
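
That vi edit can also be done non-interactively. Here’s a self-contained sketch using a one-line stand-in for the descriptor (the real dc04.vmdk has more lines, but only the extent line needs to change; busybox sed on the ESXi shell may or may not support -i, though it works on any Linux box):

```shell
# Stand-in descriptor: just the extent line from the real dc04.vmdk.
printf 'RW 50331648 VMFS "dc04-1-flat.vmdk"\n' > dc04.vmdk

# Point the descriptor at the renamed flat file.
sed -i 's/dc04-1-flat\.vmdk/dc04-flat.vmdk/' dc04.vmdk

cat dc04.vmdk
```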

After all that, here is the end result:

/vmfs/volumes/4d1b685e-858363b3-5fba-0026b9799581/twdc04 # ls -al
drwxr-xr-x    1 root     root               1120 Dec 30 14:34 .
drwxr-xr-t    1 root     root               1820 Dec 30 14:22 ..
-rw-------    1 root     root        25769803776 Dec 30 14:32 dc04-flat.vmdk
-rw-------    1 root     root               8684 Dec 30 08:30 dc04.nvram
-rw-------    1 root     root                518 Dec 30 14:35 dc04.vmdk
-rw-------    1 root     root                  0 Dec 30 08:30 dc04.vmsd
-rw-------    1 root     root               2585 Dec 30 08:30 dc04.vmx
-rw-------    1 root     root               1623 Dec 30 08:31 dc04.vmxf
/vmfs/volumes/4d1b685e-858363b3-5fba-0026b9799581/twdc04 #

That is it.  dc04.vmdk is now ready for use by ESXi as part of a new VM.  Hopefully, this is of some help to someone out there.


After a few fits and starts, I decided I needed a bona fide server to do everything I’ve been wanting to do with virtualization.  On Christmas day, I found a good ‘scratch and dent’ deal on a Dell T105 in their Outlet store.  And then . . . I waited.

Until today.  The server arrived, and I got right to work (whilst also doing some work with our latest acquisition, QCAP, which is very slick, and can be found here: http://www.quest.com/cloudautomation/ ).  Initially, I thought I was going to use VMware ESX 3.5, but after some consideration, and the fact that I had 64-bit hardware, I opted for ESXi 4.0.  Which meant I had to build another USB boot drive, as I decided to use the internal USB adapter and dedicate the 2 SATA drives to being VM stores.

Up until now, I’ve actually been running 3 AD domain controllers at home, 2 of which were actually running on Macs using Fusion.  The third, which ran on a Windows XP laptop, was extremely quirky, and seemed to fall off the network quite a bit.  And this seemed to be the result of VMware’s bridged network settings within Workstation and Player.

After about an hour of configuring the hypervisor, configuring the AD integration, and getting comfortable with the environment, I decided to shut down DC02, and bring it over onto the server.  DC01 is my main server, and that was going to stay up and running for at least a few more days until I was sure this cutover was feasible, but DC02 and DC03 were fair game.  Surprisingly, the 6 GB disk took quite a while to bring over, even on a switched, and fairly idle network.  So after about 45 minutes, I was able to get started, and turn on the VM.  At which point, I got the following error:

Module DevicePowerOn power on failed. 
Unable to create virtual SCSI device for scsi0:0, '/vmfs/volumes/<long GUID>/dc02/dc02.vmdk' 
Failed to open disk scsi0:0: Unsupported or invalid disk type 7. Make sure that the disk has been imported.

With a few quick Google searches, I found that because I had the disk set for a total of 24 GB, but only used 6 GB, ESX did not like this.  There were quite a few posts on the topic, but this one was the clearest and gave me exactly what I needed: http://blog.learnadmin.com/2010/09/solution-vmware-vm-import-failed-to.html .  Until I found this article, I knew I had to import the VM using the “zeroedthick” argument for the “vmkfstools” command, but that seemed like a lot of work, and I didn’t see the setting in the import UI.  Thankfully, the article above let me know that I could SSH into the box (yes, I set it up for remote Tech Support), and run the following commands:

cd /vmfs/volumes/<long GUID>/dc02/ 
vmkfstools -i dc02.vmdk -d zeroedthick dc02-1.vmdk
rm dc02-s* 
rm dc02.vmdk 
mv dc02-1-flat.vmdk dc02-flat.vmdk 
mv dc02-1.vmdk dc02.vmdk 
vi dc02.vmdk 

I actually had to edit the dc02.vmdk file because of the ‘mv dc02-1-flat.vmdk dc02-flat.vmdk’ step; the descriptor still referenced the old name, and I wanted to get rid of the -1 entry in the new file names.  There were a few other quirks during the ‘New VM’ dialog, such as the new VM having to have the same name as the old VM, but I got past it, and got everything set up.  One other thing I learned – don’t change the SCSI controller to the SAS one; stay with the LSI Parallel one.  I was hoping to use the ‘latest and greatest’ and got into a blue screen reboot.  After all this, the VM seems to have come up, and is running.  It now has a new NIC (ESX didn’t like the MAC address I originally created, so I had to add a new card) and I’m going to wait it out a day or two before I do any more.

Feel free to drop me a line if you have questions, or have other suggestions.  I’ll keep updating this as I make progress in the conversion to using ESXi.


It’s been a while since I’ve posted, and it’s because I have a new role with new responsibilities.  I actually have a few posts queued up, but they will go onto another site, which I will announce later.

In the meantime, I’ve been trying to get some things worked out with hypervisors.  I primarily use a MacBook nowadays, but have a Dell D830 that I want to convert to a virtual server.  However, I could not decide whether to use ESX/ESXi, Hyper-V or KVM.  And after chatting with one of my colleagues in the virtualization group, I do not have to.  He suggested I install each one on a USB stick (he said 2 GB would be fine, so I got three 4 GB sticks) and then just use the drives in my D830 for the VMs themselves.  And he suggested using SSDs (Solid State Drives) for the VMs to get better performance out of them.

I thought about doing this about 3-4 years ago, but nothing was “ready for prime time” when it came to virtualization and laptops.  Rob M assured me this was no longer the case, and that my Dell was a very, very viable option.  Well, we’ll find out.

First off, I set out to try ESXi.  Since the Dell is 32-bit natively, I found this article on making the USB stick.  However, I’m on a Mac, so I had to modify the directions to fit my needs.  First, I downloaded ESXi 3.5, Update 5.  Then I used Finder to crack open the ISO, and then extract out install.tgz.  Once I did that, the extract of the tgz file wound up in my Downloads folder.  So the full path of what I needed (my username is dimikagi) was:

/Users/dimikagi/Downloads/install/usr/lib/vmware/installer/VMware-VMvisor-big-3.5.0_Update_5-207095.i386.img
The USB drive happened to be /dev/disk2, and when I first ran dd, here is what I got:

twmac04:~ dimikagi$ sudo dd bs=1024 if=/Users/dimikagi/Downloads/install/usr/lib/vmware/installer/VMware-VMvisor-big-3.5.0_Update_5-207095.i386.img of=/dev/disk2
dd: /dev/disk2: Resource busy

I then realized I had to unmount the drive (but not eject it), so I just used Disk Utility to do it.  I then ran dd again, and here’s what happened:

twmac04:~ dimikagi$ sudo dd bs=1024 if=/Users/dimikagi/Downloads/install/usr/lib/vmware/installer/VMware-VMvisor-big-3.5.0_Update_5-207095.i386.img of=/dev/disk2
768000+0 records in
768000+0 records out
786432000 bytes transferred in 528.885199 secs (1486962 bytes/sec)
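
The dd pattern itself can be sanity-checked against a throwaway file before pointing it at a real device; the only thing that makes the real command destructive is the of= target. (One aside: on a Mac, writing to the raw node, e.g. /dev/rdisk2 instead of /dev/disk2, is typically much faster.)

```shell
# Same invocation shape as above, but writing to a scratch file so it
# is safe to run anywhere. bs=1024 with count=1000 should yield exactly
# 1,024,000 bytes, which is easy to verify afterwards.
dd if=/dev/zero of=scratch.img bs=1024 count=1000 2>/dev/null
wc -c < scratch.img
```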

So that is where I currently am.  I have a USB stick with ESXi 3.5 and it should be bootable on the Dell.  On the Mac, it looks like it created 4 partitions, which is a positive sign. Unfortunately, the Dell is upstairs, and I’m heading out with my family, so we’ll need to see where this experiment goes next.


On a completely unrelated topic, I am helping my better half create a website for our daughter’s gymnastics team.  My hosting provider uses Fantastico, and I was able to give her a large selection of content management systems to choose from.  Which one did she select?  Joomla.  I thought it was interesting that, of all the systems out there she could have selected, she chose that one.  I’ll periodically post updates on how she does with the site, and what the gotchas have been, as I’m sure there will be a few.


I’ve been working with VAS for quite a while, and have gone through all the versions since 2.6, and this has to be the biggest thing I’ve seen in over 4 years of working with the product.  And the big thing is not VAS (or QAS, as it’s now known) itself, but a free add-on called Identity Manager for Unix (IMU).  You can download your copy from here.

And the cool thing is that you can use the product without buying VAS.  What is it?  It’s a free, web-based console for managing Unix, Linux and Mac users & groups.  Obviously, if you buy VAS, you get a lot more functionality, but the core functionality alone makes it a cool download.  If you have more than 2 Unix boxes, this makes life a lot easier.  You can now assess all your *nix boxes, get a list of all your users and groups, and make changes right there, in a browser window.

And how do I know it’s cool?  Because I was on-site with a customer that had been evaluating VAS 3.5 for about a month, and they confirmed it.  They were going to have me go through and show them all the commands, tips & tricks, and refresh them on all the things I’d shown them the month before.  Well, after installing IMU and running through how it worked, they simply replied with “we got everything we need.  You answered all the questions we had with this console, and we feel pretty good that we can drive everything through this instead of the command line.”  And that was the goal . . . make Unix account management easy to drive from a single point, with no need to script or even log onto multiple boxes.  Everything is dead easy . . . and did I mention it’s free?!?!?


What happens when Texas meets technology? You get this:

I haven’t posted in a while, but came across this and found it very, very amusing.  I’m not sure that I’d buy a trucker hat, no matter what it says, but the mugs are cool – I like mugs.  And “word has it” that more things will be added . . . I’ll be watching.


I just wrote a very long email to a client describing how VSJ supports Federation, and thought it would help those looking for a simple (albeit long) explanation of Federation.  At some point, if I have some time, I’d love to put together some animations to show all this and cut down on the verbiage.  But this is what you get in the meantime.  It’s actually an email thread that I’ve purged of customer names and references.  Enjoy.

From: Joe at Acme
Sent: Thursday, July 08, 2010 16:10
Subject: SSO for Java


[...] sorry to be duplicating my questions but I lost my notes from our June 8 conversation.  As I read the SSO For Java specs, it describes an integration of JBoss and JAAS environments with AD content.  I don’t know the Java world that well any more, but will the java environments have the security context information that is supposedly in the Windows “Claims Aware Programming” environment for Windows developers?

Information like what LOA was used in authentication, what identity credential was used, what is the user’s rank that was passed in the identity federation profile, was the rank passed via an encrypted path, was the encryption network level or was it an end-to-end message-level approach, etc etc.


From: Dmitry Kagansky at Quest
Sent: Friday, July 09, 2010 12:00 AM
To: Joe at Acme
Subject: RE: SSO for Java


Let’s do a quick summary, and also send you some additional information since our discussion.
1. VSJ has lots of flavours.  Of note:

* The ‘Standard Edition’ is the most common one, and is usable in all commercial app servers (WebLogic, WebSphere, Tomcat, JBoss, Oracle App Server, JRun, etc).  Its most common use is as a servlet filter or as a JAAS authentication module.

* There is a ‘JBoss Edition’ which allows VSJ to be installed as a valve.  A valve is specific to Tomcat and JBoss, and if you have an app that requires a valve for authentication, then you’ll need to use this edition.  Otherwise, you can make use of the standard one.

* There are other editions that provide custom connectors for specific platforms, just like the JBoss one.  For example, with the WebSphere edition, you can install VSJ as a TAI (Trusted Association Interceptor) and be able to consume LTPA (Lightweight Third-Party Authentication) tokens, which are proprietary to WebSphere.  Just like the JBoss edition, if you don’t need a TAI, then you can use the standard edition.

2. In all of the editions, except for the one below, a user comes in with Kerberos, NTLM or Basic Auth credentials (depending on the config), gets authenticated against Active Directory, and then has a Java Security Principal created within the application.  The authentication mechanism (servlet filter, valve, TAI) dictates which version you need to use.

3. Now, for the oddball, which is for Federation and claims. There is a ‘Federation Edition’ which ships with the Standard Edition, but is a separate set of jar files, and supports ADFS 1.0.  ADFS 1.0 is SAML (1.x) claims and tokens.  Should you want SAML 2.0 support, then MS provides a SAML 2.0 to 1.x adapter allowing you to use VSJ with your java apps to receive either SAML 1.x or 2.x tokens.  With SAML 1.x, your encryption is using SSL, and the schemes used are whatever you specify in securing the site with an SSL cert.  The web server is the one responsible for providing the security.

With SAML 2.x, the payload itself is encrypted before it is sent out.  It is still encrypted using a cert, and the type you select determines the encryption level, which then allows you to send everything over port 80.  While the data may seem like it’s in the clear, because the encryption happens before the transmission starts, it is still gibberish going across the wire.

That should cover a good amount of the conversation we had last month.

Now, for some new information.  We have since released a version of Webthority, and that version supports using VSJ on both the front and back end.  What is Webthority?  It is a reverse proxy which can secure your applications by proxying the content, rewriting URLs and managing a session, as well as providing Single Sign On to numerous apps, using numerous authenticators.  What that means is that you can use it to log in with LDAP credentials, a smart card or certificate (PKI), a Kerberos ticket, a database login, or a SAML token, and establish a session across multiple applications through a common ‘gateway.’  It’s a way to consolidate your URLs as well, where you can go from:




to something like:




You can consolidate those URLs, consolidate SSL certificates and use something called ‘protocol transition’ (which is built into VSJ) to go from one set of credentials to a set of Kerberos credentials.  This is all within Webthority, and can be used in conjunction with VSJ as well.

We have also made our own STS which not only provides SAML (Federation) support, but also supports something called ‘JIT Provisioning.’  The best thing to do is to check out these blog entries by the Product Manager for ActiveRoles Server where he describes this new functionality here:

I’m sure it’s all a lot to take in, so feel free to shoot back any questions you may have.


From: Joe at Acme
Sent: Friday, July 09, 2010 07:11
To: Dmitry Kagansky at Quest

Subject: RE: SSO for Java

Thanks for the write-up.

I still am fixated on claims-aware programming.  It sounds like it is nothing more than providing a set of APIs for the application developer to use in making (access) decisions about a user?  Some of the claims will come to the application directly via a security token like SAML, and others are part of the OS environment that one uses the APIs to get to?  If ADFS 2.0 is in use for abstracting the Identity Federation away from the app developer, I would think that the application would not see any SAML security tokens?

So then does the Java Security Principal (when VSJ is in use) provide all the claims a developer could want, including all the security context information?  No difference then between what a Windows developer (with ADFS 2.0 handling the Identity Federation for the enterprise) has access to and what a Linux Java developer has access to?

If XXXX is on a track for QAS with Oracle’s OIM suite (OAM, OIF, OES, OVD, etc), and if they are also a Windows AD shop with ADFS 2.0 also available, then maybe QAS + VSJ would make more sense than going the Webthority route?

Joe at Acme

From: Dmitry Kagansky at Quest
Sent: Friday, July 09, 2010 18:06
To: Joe at Acme
Subject: RE: SSO for Java

Here are the short answers, and you can read the write-up below for more details:
- With SAML, the operation is pretty binary.  Claims are put into a token, and the app can either access the claims or ignore them.  It’s not flexible enough to make decisions like you describe.

- The app itself should not know or care about SAML; it is another abstraction, just like VSJ is a Kerberos “authenticator” that is put in front of the application.  Once the user gets past VSJ, it shouldn’t matter to the app how the user got there.

I actually think you’re expecting way more of claims than they really are.  You’re buying into ‘the dream’ and some of the ‘marketecture.’  And that’s not a bad thing, but let’s look at what this means practically.

First, let’s level set, and define some terminology so we’re on the same page. What you have today is:
- Federation: This is an abstract term.  In my mind, this is just a way to separate management of resources from management of accounts that can access those resources.  In most cases, this comes as a result of two different organizations wanting to share resources, and allow accounts from one org to access resources from another org.  The caveat is that the org with the resources trusts (in some way) the account org, and accepts statements (or ‘claims’) made about a user by the account org.

- SAML: This is a generic term that is used to describe anything from the notion of Federation down to the actual token sent during the authentication/authorization action.  It can be a protocol, a markup language, the actual token, and a standard.  I get way too many questions about “do you support SAML?” which leads into a very long-winded discussion.  So let’s discuss the key point, which is the protocol: there are 2 main flavours, 1.x and 2.x.  They are not complementary; they are competing standards.  There are subtle differences in the syntax, but the big difference is what I outlined below.  SAML 1.x is “in the clear” and it’s up to you (the sys admins/app managers) to secure it.  So you have to encrypt the channel, typically with SSL.  SAML 2.x, on the other hand, encrypts the content –before– it is transmitted.  So even if the channel is wide open and visible, everything is still gibberish going across the wire and is decrypted at the other end.

- Certificates: These are used for all sorts of things, but in the context of this conversation, they are used to trust the organizations discussed above.  Obviously, there’s a public and private key pair set up, and the 2 orgs that are federating perform a key exchange at some point early on in the agreement.  This key exchange forms the Federation Trust between the two orgs, and validates a user from one org to the other.  So when you ask about encryption, the answer is almost always “whatever types of certificates you choose to use.”  There are some limitations, but most certs use standard encryption types.
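
To make the trust mechanics concrete, here is a minimal, illustrative sketch (not any vendor’s actual API) of what that key exchange buys you: the account org signs a claim with its private key, and the resource org verifies it with the public key it received during federation setup. The class and claim text are hypothetical; only the standard `java.security` calls are real.

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

// Illustrative sketch of federation trust: the resource org can confirm
// a claim really came from the account org, using the public key
// exchanged when the federation agreement was set up.
public class FederationTrustSketch {
    public static void main(String[] args) throws Exception {
        // Stand-in for the account org's keys; in practice the public
        // half arrives as a certificate during the initial key exchange.
        KeyPair partnerKeys = KeyPairGenerator.getInstance("RSA").generateKeyPair();

        byte[] claim = "Dmitry Kagansky, Marketing Manager".getBytes();

        // The account org signs the claim with its private key...
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(partnerKeys.getPrivate());
        signer.update(claim);
        byte[] sig = signer.sign();

        // ...and the resource org verifies it with the exchanged public key.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(partnerKeys.getPublic());
        verifier.update(claim);
        System.out.println(verifier.verify(sig) ? "trusted" : "rejected");
    }
}
```

If the claim is altered in transit, or was signed by a key the resource org never exchanged certificates with, verification fails and the user is rejected.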

So what does a Federated transaction actually look like?  Here’s a high level example.  Let’s pretend for a minute that Acme has some website called ‘partners.Acme.org’ and on that site, trusted partners can log in, and access information that Acme provides to their partners.  At the same time, Acme does not want to manage and maintain lists of users to access the site.  Job changes, turnover, and other factors lead to Acme telling their partners – “anyone that works for you that has a certain role (say, Marketing Manager) will be allowed into the site.  It is up to you, Mr Partner, to properly provision and deprovision your employees, and we trust anyone you send over that you claim to be a Marketing Manager.”

And now let’s say Quest is such a partner.  Because I work for Quest, I may (I stress –may–) have access to that site.  As long as I come from the Quest network (which can be confirmed by the certificates exchanged earlier) and the claim that Quest sends on my behalf reads ‘Dmitry Kagansky, Marketing Manager,’ I will be allowed into the site.  That’s all we’re really talking about here.  Quest makes some claims about me, and Acme trusts Quest’s claims.  If I leave, then Quest deletes my account, and I no longer have access to the Acme site.

That’s a high level overview.  Now, looking at what happens in a Java server, when someone authenticates, a “thing” (an object, a constructor, etc) gets created for the user called a Java Security Principal.  That Principal holds all sorts of information about the user that just logged in.  As you say, it’s the security context, and how it is generated should be irrelevant to the app developer.  And part of the information in the Principal is a list of all the roles the user has.  What VSJ provides is the ability to take the claims from someone’s SAML token during a Federated exchange, and put them into that list of roles.  So as a developer, you can now write code that says “if the user has a role of ‘Marketing Manager’ you are allowed to open this file.”  From an app standpoint, it should not care whether the person authenticated with a SAML token, a Kerberos ticket, or through carrier pigeon.  Somehow, the user got in, through a trusted access method, and they are here.  So you are right that the application should not know or care about the SAML token.  But the part about being able to ‘blend’ claims from the token versus the OS environment, that’s still a bit difficult and is not something that can be done easily with SAML.
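
As a rough sketch of that developer experience (the `UserPrincipal` class here is a hypothetical stand-in, not VSJ’s or Java EE’s actual Principal type), claims from the token have already been mapped into the role list, so the app only ever asks about roles:

```java
import java.util.Set;

// Sketch: claims from a SAML token have been mapped into roles on the
// user's security principal; application code checks roles, not tokens.
public class RoleCheckSketch {

    // Hypothetical stand-in for a Java Security Principal with roles.
    record UserPrincipal(String name, Set<String> roles) {
        boolean isUserInRole(String role) {
            return roles.contains(role);
        }
    }

    public static void main(String[] args) {
        // Pretend the authenticator (VSJ, in this discussion) already
        // validated the token and filled in the roles from its claims.
        UserPrincipal user = new UserPrincipal(
            "Dmitry Kagansky", Set.of("Marketing Manager"));

        // The app neither knows nor cares how the user authenticated –
        // SAML, Kerberos, or carrier pigeon.  It only checks the role.
        if (user.isUserInRole("Marketing Manager")) {
            System.out.println("access granted");
        } else {
            System.out.println("access denied");
        }
    }
}
```

Running this prints “access granted”; swap in a principal without that role and the same code prints “access denied,” without the app ever touching a token.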

Which leads to what people want Federation to become.  There’s talk that SAML and claims are not enough, especially because the org with the resources wants to do more than just accept or reject claims. They want conditional things to happen.  They want ‘extensible APIs’ as you mention.  They want lots of things that are not yet part of SAML.  And if you search around, you’ll find something called XACML (eXtensible Access Control Markup Language) which, like SAML, is both a construct/token as well as a protocol and (sort of) an API.  Personally, what I see happening is that SAML will assume authentication responsibilities and XACML will take over the authorization duties, but right now it’s a very, very hazy area.  SAML is here now, and ready for use (albeit somewhat limited) and XACML is still a few years off, but plenty of vendors are starting to support it and it’s starting to pick up some steam.

Finally, where does Webthority fit into all this?  Well, Webthority is a reverse proxy.  And it allows for multiple authentication sources.  And given that Federation is in such a state of flux, VSJ (with ADFS support) may not be enough to do it all.  You may still have cases where people need to log in using LDAP credentials.  Or they have a login in some database somewhere.  Webthority can provide SSO for those users, along with the Federated users.  And it actually provides a manageable interface to control all these settings rather than writing lots of authentication code.

Whew – that’s quite a lot to take in on Friday.  Hopefully, this wasn’t too much for you, and it wasn’t too pedantic.  The short of it is that VSJ can provide the same SAML functionality for Java applications that Microsoft provides for their Windows apps.  And we do this using the same Microsoft plumbing, so there’s very little to add if you are already using ADFS (1.x or 2.x) and want the same functionality for your Java apps.  And, if you have (web) apps that you don’t want to overhaul, or that don’t use a Java Security Principal, Webthority may be a pretty good alternative as well.  Plus, VSJ can be used with Webthority so you can support the new (Federation, SAML, Kerberos) with the old (DB logins, LDAP, NTLM, etc).

** Note that Federation often happens internal to an organization, and can be used just to segregate resource management from user management.  It does not have to be 2 different orgs, but that is where the origins come from.

*** As an aside, when SAML 1.x came out, it was adopted by Microsoft (and IBM to some degree) in ADFS 1.0.  SAML 2.0 was published a few years later, and was supported by the ‘anything but Microsoft’ crowd (The Liberty Alliance).  Since then, Microsoft has put out Geneva, which is their codename for ADFS 2.0, and it now supports both SAML 1.x and 2.x.

