Thursday, 10 June 2010

Time for a New Search Engine?

Today Google UK is putting irritating background images on their Home Page, and to turn them off you need to log in. http://www.google.com/ is not doing this (today, at least), but when UK users try to navigate there, they are redirected to http://www.google.co.uk/ and the obnoxious image. The work-around is to navigate to www.google.com/ncr (No Country Redirect).

If this isn't gone tomorrow I'm going to change my browser Home Page from Google to http://www.alltheweb.com/. The reason that I and many other IT people moved to the Google search engine in the first place, years ago, was because it loaded very fast. It had a simple clean page with no bells and whistles. Processing a logon account and loading an image does not make for a fast load anymore.

But the thing that really annoys me is that there is no way to send my negative feedback to Google about this feature, or if there is, it's really well hidden. Even their "Contact Us" link doesn't provide an e-mail address or a form for this, and I don't feel like sitting on hold on the telephone.

To change your Home Page in Internet Explorer:
  • Navigate to http://www.alltheweb.com/ (or other preferred page).
  • In the top right of Internet Explorer menu bar, click Tools, then Internet Options.
  • In the Home Page section, ensure that the new web site is shown, and then click Use Current and then OK.
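If you prefer to script the change, Internet Explorer keeps the Home Page in the registry as the "Start Page" value, so importing a small .reg file like this should have the same effect:

```
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Internet Explorer\Main]
"Start Page"="http://www.alltheweb.com/"
```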

Monday, 7 June 2010

Larry Ellison on Cloud Computing

I agree with most of what he says. "Cloud Computing" is simply a new term for what has been widely available for over a decade. There are some newer services available via the web, or cloud, and it does provide some new business models for IT in terms of renting resources and services in smaller increments only when you need them, but for the most part it is a fancy new buzz-word.

Wednesday, 21 April 2010

Social Engineering and 2-Factor Authentication

Overview
This article will describe the low-tech hacking method known as “social engineering” and provide a script scenario of how this might work. It will then describe some methods of protection against this, and how a strong company policy is required for such protection.

We have all heard about the dangers of hackers entering our corporate computer networks by using sophisticated (or not-so-sophisticated freely-downloadable) tools to hack in past network firewalls, and exposing sensitive corporate data to the competition or the outside world. We may also have heard of malware, or the more malicious crimeware, which has the purpose of performing identity theft. It runs on a corporate workstation and gets there either via the web browser or phishing e-mail.

If someone were motivated to enter your corporate network using these methods, however, they would need advanced network engineering (i.e. hacking) knowledge or some half-decent programming skills. Fortunately for the data thief, access to your network can be obtained with none of these, and the only tool needed would be mediocre sales skills.

Social Engineering
The thief does a Google search on his target company, ABC Corp. He finds a corporate partner, XYZ Ltd., on ABC’s “Partner” page on their web site and the name of a director from their “About Us” page. He then goes to XYZ’s web site and obtains the name of any director or employee. He phones ABC corp.

Call #1
Thief: Good morning, this is Adrian Andrews, from the IT department from XYZ Ltd. One of my directors, Brian Bookman, needs me to FTP some information to Charlie Crisp, your Finance Director. Could you give me the name and e-mail address of one of your IT people so that I can request the technical details from them by e-mail?

Receptionist: Certainly Mr Andrews, you can contact Duncan Drisedale at ddrisedale@abc.com.nul.

The thief digs a little deeper on the Internet to see if he can find any new people starting or transferring to ABC Corp. If he does, it’s a bonus, otherwise, he just gets as many employee names as he can, about a dozen or so. He would probably get this from the downloadable annual report.

Call #2
Thief: Good afternoon, may I speak with Eleanor Ewing? She’s just started so I don’t have her extension yet.

Receptionist: Certainly Sir.

Eleanor: Eleanor speaking.

Thief: Good afternoon. We haven’t met yet; I’m Duncan Drisedale from corporate IT; you probably see my name on the phone list. I just want to make sure that you understand the remote access procedures for getting into the network from home. Did one of my people show you this, or did you get the instructions?

Eleanor: Oh yes, I was shown last week.

Thief: Would you mind terribly if we went through the procedure now, just so you’re completely comfortable with it? We’re trying to reduce out-of-hours support calls when getting access is more urgent than a relaxing trial run during the week. Perhaps you could just run through the steps, telling me exactly what you’re doing each step of the way. I’ll stop you if anything needs clarifying.

Eleanor: OK, this is what I’m doing, as per the sheet: step one [details].... step two [details]..., etc...

Thief: Excuse me, just checking, are you typing “http://url” or “https://url” [with more detail].

Eleanor: No, I’m typing “https://FullURL/etc” [details]

Thief: And tell me, are you entering your user name or e-mail address? Could you please confirm for me exactly how so I can be sure you have the correct syntax?

Eleanor: Yes, it's... [details]. Oh, I'm in! Thank you very much; this will be useful.

Thief: You’re quite welcome. Goodbye.

Call #3
Thief: Good morning, may I speak with Frank Feilding?

Frank: Frank speaking.

Thief: Good morning, this is Duncan Drisedale from corporate IT. Listen, we had a system glitch last night and I see that you have used the remote access system at least once before. We need to do a test, but we can’t do that using our IT administrative accounts; we need to use a real user ID to test the full business functionality. Would you be able to help us? It would take five minutes of your time.

Frank: Sure.

Thief: OK, what I’m going to do is test the remote access from here, but logged on as you. I have the procedure here; could I just verify that you would do this the same way as I have documented?

The thief goes through the whole remote access connection procedure correctly, talking out loud as he goes, as he learned it from Eleanor. It is a familiar procedure to Frank. Finally at the log on...

Thief: OK, your user name and password? Is it “ffeilding@abc.com.nul”? and?

Frank: Yes, and the password is [password].

Thief: [Humming and hawing] Yes, it all seems to be working fine. I’m logging off now. Thank you very much for your time.

The thief now has Frank’s user name and password, and he knows how to use the company’s remote access system. No hacking or programming required. Admittedly, the scenario above is best-case for the hacker, but if the line doesn’t work on user #1, he has another twelve or a hundred user names to try these two calls on. Some may not give out the information, but out of a dozen, it’s safe odds that one will.

He can now log on as Frank at leisure to ABC network from an Internet café (virtually untraceable computers rented using cash) and download all the corporate secrets Frank has access to.

While logged on, if the thief had an extra layer of IT sophistication, he could download and install a password cracking tool (search Google for "windows password cracking" and you will get 121,000 hits) to get passwords of system administrative accounts and have access to every file on the network. This could take as little as another hour for an amateur to accomplish.

This is social engineering hacking. Obviously a smooth voice and some sales techniques help, and there are many variations to the scenario shown above. It boils down to the thief getting on your network not through machines, but through people, who are fallible.

Two-Factor Authentication
Two-factor authentication combines something the end-user knows with something he possesses. The person possesses a physical token, often in the form of a key fob that either shows a constantly changing number to be typed in, or plugs into most computers as a USB key. For the user to gain access to corporate resources, they must provide something they know, their user name and password (and the remote access procedure in the first place); and something they possess, the number on the token or the physical key. The “possession” piece could, in fact, be a corporate laptop: it is possible for remote access connections to be verified as coming from these laptops only, thus not allowing access from Internet cafés or from computers at client or partner sites.
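As an illustration of the "constantly changing number", here is a minimal Python sketch of a time-based token in the style of the later TOTP standard. This is not the algorithm used by any particular vendor's fob (RSA's is proprietary); it only shows the principle that the token and the server share a secret and a clock:

```python
# Illustrative sketch of a time-based hardware token. Hypothetical, not any
# vendor's real algorithm; the truncation step follows the RFC 4226 style.
import hashlib
import hmac
import struct
import time

def token_code(secret: bytes, timestamp: float, interval: int = 60) -> str:
    """Derive a 6-digit code from a shared secret and the current time window."""
    counter = int(timestamp) // interval          # same window -> same code
    msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation
    number = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{number % 1_000_000:06d}"

# The server holds the same secret, so it can verify what the fob displays.
print(token_code(b"shared-secret", time.time()))
```

Because both sides derive the code from the shared secret and the clock, a stolen code is only good for one short window, which is why reading the digits to a caller once is of limited use to a thief.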

This is the technology that can prevent social engineering attacks. The scenario above wouldn’t work, as the thief would also require a hardware key to log on. In the case of an RSA key fob that displays digits, he may just be able to get those numbers read to him by an unsuspecting user one time, but it would be unlikely he could get it twice. End-user training is also required.

Corporate Policy
A senior executive, Gerry Garabaldi, is on the road and cannot access the corporate intranet. He calls the IT Help Desk and Henry Hooper, who only started here two weeks ago, answers the call. It is possible for an IT administrator to over-ride the two-factor authentication and provide a temporary PIN to a user over the phone.

Gerry: I’m in Milan at DEF Corp. on one of their workstations, and I can’t log on to our corporate intranet.

Henry: OK, sir, I can reset your Windows password to [password]. Now if you could try again, please?

Gerry: Hmm, it’s still not working. It keeps asking me for this secondary password. What is that?

Henry: Ah yes, that is where you enter in your key fob number.

Gerry: Oh no, I’ve forgotten it! You’re going to have to disable that or over-ride it, or something.

Henry: I’m sorry, sir, I really cannot. You could be anybody at all trying to gain access; I’m not allowed to let anyone in with no credentials in this manner.

Gerry: Now you listen to me! If you don’t get me onto that site in the next fifteen minutes, we’re going to lose a ten million pound deal! Let me speak to your manager right now!

So what does Henry or his boss do? Does he allow the person on the phone in? If he does, he could be allowing a hacker onto the network who has just successfully social-engineered his way past two-factor authentication security, thus rendering it useless. If he does not, he could be holding up a huge deal Gerry is about to make.

The policy can only be decided by the business users of the system and, as can be shown by the example, must be approved and backed by the highest level of management. Without such a policy, written down and published, many security solutions become useless.

Saturday, 13 February 2010

Group Policy Preferences, Options Considerations

Overview
Group Policy Preferences (GPPs) first came out with the Windows Server 2008 Group Policy Management Console (GPMC), introducing an array of configurable items not available in earlier native policy objects. Microsoft calls these “preferences” or “unmanaged settings” because after the preference has been applied, the end-user can change it back again.

In my opinion, however, these aren’t the best terms to apply here. The next time the Group Policy Objects (GPOs) refresh (by default every 90 minutes), or the next time the user logs on or the computer reboots, the preferences are applied once more. When I first heard about GPP, I thought this was a perfect way to set all those things we used to modify in the Default User profile, for example, the Internet Explorer Home Page. We would set that to the corporate intranet in the Default Profile, new users would get that setting, but afterwards they could change their Home Page as desired and it would remain there.

One of our specialties is making applications (perhaps historically misbehaved applications) work well with Terminal Services and Citrix. We often used the Default Profile method of setting the minimum registry entries required by an application (if Terminal Services “install mode” didn’t quite work), while leaving them customisable by the end-user through the front-end Graphical User Interface (GUI). Managing the Default Profile, however, can be messy, and Microsoft doesn’t really like the practice anymore. In fact, you can’t even overwrite the Default Profile in Windows 7 without going through a bit of a procedure, and at least one Microsoft employee has posted on a TechNet forum that overwriting has not been supported by Microsoft since Windows XP/2003. It’s not hard to manually add registry entries to the Default Profile, but I thought these new GPPs would save us from all that.

But if GPPs get reapplied at GPO update time, how can we re-create this requirement of “suggesting” settings, but allowing the user to permanently change them? Microsoft has an answer to this: the Apply once and do not reapply setting of a GPP, available in the Common tab. This works fine and does exactly that. There are management problems, however, and some unexpected behaviour when using this feature. It’s still a step in the right direction, and it is better than modifying Default Profiles, but you need to be aware of these “glitches” to keep on top of settings defined this way.

This article will delve into this Apply once and do not reapply feature, demonstrate its shortcomings, and provide some suggestions in overcoming them. It will also explore the Remove this item when it is no longer applied feature and discuss its particular dangers.

GPP Review
No software or services need to run at the back-end, nor does your Active Directory (AD) domain need to be running any Windows 2008 Domain Controllers, in order to implement Group Policy Preferences (GPP). You must, however, deploy the GPP client-side extension (CSE) to any client computer to which you want to deploy preferences. The CSE is available as an optional update in Windows Update, or it can be downloaded from the Microsoft web site and installed manually. There are versions for the following operating systems:
·         Windows XP with SP2 or later
·         Windows Vista
·         Windows Server 2003 with SP1 or later

Windows Server 2008 and Windows 7 already include the CSE. You must also install the Microsoft Remote Server Administration Tools (RSAT) on the workstation from which you will be managing the GPPs, or use the Group Policy Management Microsoft Management Console (MMC) snap-in on a Windows Server 2008 server.

In the Group Policy Management Editor, the Preferences section looks very much like the ScriptLogic Desktop Authority software, a fairly common third-party solution used to fill this historical void in Group Policy.
Creating a new entry under any of the Windows Settings or Control Panel Settings nodes launches an entry dialog that differs depending upon the type of entry; a new mapped drive, for example, prompts for the network location and drive letter.
We’re going to be using a custom registry entry as an example later in this document; its dialog prompts for the registry hive, key path, value name, value type, and value data.

Let’s create something to work with.
a.       First I installed the Microsoft RSAT onto my Windows 7 workstation.

b.      On my Windows 2003 AD domain I created a “TestGPP” user account in the “Users” Organisational Unit (OU) and also created a “TestGPPgroup” security group for later use. The user is not a member of this group at this time.

c.       I created a “TestGPP” GPO and linked it to my “Users” OU.

d.      I then removed the “Authenticated Users” group from the GPO’s Access Control List (ACL) and added only my test user account, “TestGPP” to the ACL with Read and Apply rights so that I wouldn’t accidently mess anyone up except my test user.

e.       I then edited the new GPO and added a new Registry item preference. In this example, I’m going to set the user’s Desktop background colour to be red. I expanded User Configuration, Preferences, Windows Settings, and selected Registry, then right-clicked and added a new item, which opens the registry entry dialog.

f.       The hive and path are “HKCU\Control Panel\Colors”, the value name is “Background”, the value type is “REG_SZ”, and the value data is “255 0 0” (red).

g.      My AD site here only has one Domain Controller (DC), so I did not need to synchronise my domain in order to begin testing.

Remember where GPO values are stored?
h.      In the Group Policy Management window, highlight the GPO in question and select the Details tab. Note its Unique ID.
i.       In Windows Explorer, navigate to “\\DomainName.Name\SYSVOL\DomainName.Name\Policies”. (Replace “DomainName.Name” in the path with your own fully qualified AD domain name.) Then navigate to the folder with the same name as the Unique ID as noted above.

j.      Under that folder, navigate to either the “Machine” (for Computer Configuration) or “User” (as in the example above, for User Configuration) folders, and then to the “Preferences” sub-folder. This folder may be empty until some preferences are defined. Mine has a folder called “Registry”, which has a “registry.xml” file in it.

k.       When I edit that file with a text editor such as Notepad, I see contents similar to this (I have added hard returns for readability):

<Registry name="Background"
          status="Background"
          image="7"
          changed="2010-02-10 14:13:09"
          uid="{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}"
          bypassErrors="1">
  <Properties displayDecimal="0"
              default="0"
              hive="HKEY_CURRENT_USER"
              key="Control Panel\Colors" name="Background"
              type="REG_SZ"
              value="255 0 0"/>
</Registry>

l.      If I were to add more GPP types other than Registry, I would see other folders under that GPO folder such as “Drives”, “Files”, “Environment Variables”, etc., each with their own “*.xml” file.

m.   When I log onto my computer as user “TestGPP”, my Desktop background has been successfully set to red.
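The “registry.xml” contents are plain XML, so they can also be inspected programmatically. This Python sketch parses a sample trimmed to the attributes shown above (a simplified fragment, not the complete GPP schema):

```python
# Pull the registry settings back out of a GPP "registry.xml" file using
# Python's standard XML parser. The sample below is a simplified fragment
# based on the listing in this article, not the full GPP schema.
import xml.etree.ElementTree as ET

sample = """
<RegistrySettings>
  <Registry name="Background" status="Background" image="7"
            uid="{11950636-4A63-46A1-9A52-3854F61C6149}">
    <Properties hive="HKEY_CURRENT_USER" key="Control Panel\\Colors"
                name="Background" type="REG_SZ" value="255 0 0"/>
  </Registry>
</RegistrySettings>
"""

root = ET.fromstring(sample)
for props in root.iter("Properties"):
    # Each Properties element carries the hive, key path, name, type and data.
    print(props.get("hive"), props.get("key"), props.get("name"),
          props.get("type"), props.get("value"))
```

This sort of script is handy for auditing what a GPO actually sets without clicking through each preference item in the editor.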

Results of GPP Options
With the standard actions and options of the GPP defined above, the test user can change their background colour to something else, but the next time they log in, it will be red again. There are certain other types of GPPs that will be reset back to the value defined every time Group Policy refreshes, which is every 90 minutes by default (a background colour change requires a re-logon to take effect regardless of when the value changes).

What happens when the GPP is disabled or removed? I’ll create this scenario by disabling the specific preference item; we don’t have to disable the entire GPO.
With the default settings, and assuming that the user has not changed their background colour, it will remain red until they change it. I logged back on with my test account and it was indeed red. I changed it to brown, logged off and on again, and it remained brown.

Remove this item when it is no longer applied
If we look at the properties of the GPP, we will see that the default Action in the General tab is “Update”. The difference between “Update” and “Replace” is that “Replace” first removes all values before re-creating them. If we go to the Common tab and select the Remove this item when it is no longer applied option, we’ll get a warning that it will also set the Action to “Replace”.
I enabled the GPP again, set the options as discussed, and logged on as the test user to ensure that it was taking effect again. Then I disabled the GPP and logged back on as the test user. The background colour was now black, or 0 0 0, the colour set in the Default Profile of this computer. Also, if I manually look at the registry key, “HKCU\Control Panel\Colors”, there is no “Background” value (there was in all previous sessions). There won’t be until I set the colour to something else. This proves that the “Replace” Action does what it’s supposed to do. The GPP CSE runs and analyses GPPs even if they are disabled. This behaviour would also occur if the entire GPO was disabled or filtered from being applied to that user.

This is important to remember. Consider the following scenario.
a.       A user has set some preferences within an application, perhaps required to make it function at all (maybe setting up an ODBC DSN, for example), that results in some registry changes in their profile.

b.      IT decides to control this centrally by creating those preferences in a Registry GPP, and selects the Remove this item... option described above.

c.       Time passes and for some reason IT decides to remove or disable the GPP, or maybe the computer has lost communication with the DC and GPOs do not run.

d.      The end user not only loses those controlled settings, but they also lose the settings they had originally manually set, and receive generic settings from that computer’s Default Profile. In a bad case, this could even break the application and result in a call to the Help Desk.
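The loss of the user's original settings can be modelled in a few lines of Python (a toy model of the “Replace” semantics described above, not actual CSE code):

```python
# A toy model of the "Replace" action with "Remove this item when it is no
# longer applied": the value is deleted outright when the GPP falls out of
# scope, so the user's original setting is never restored.
def apply_replace_gpp(profile: dict, name: str, value: str, in_scope: bool) -> dict:
    """profile maps registry value names to data, like a slice of HKCU."""
    result = dict(profile)
    result.pop(name, None)       # "Replace" removes the value first...
    if in_scope:
        result[name] = value     # ...and re-creates it only while in scope
    return result

user_profile = {"Background": "139 69 19"}   # the user's own choice of brown
in_force = apply_replace_gpp(user_profile, "Background", "255 0 0", in_scope=True)
out_of_scope = apply_replace_gpp(in_force, "Background", "255 0 0", in_scope=False)
print(in_force)       # GPP in force: red
print(out_of_scope)   # GPP removed: the value is gone entirely, not brown again
```

The end state is an empty slice of the profile, which is why the user then falls back to whatever the computer's Default Profile supplies.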

There are exceptions. Not all GPP types can have this option set.

Notes:
·         I could not create an Applications GPP to test. I don’t have any application preference plug-ins installed. Note from the Microsoft TechNet library:
Group Policy includes the Applications preference extension. For users, this extension allows you to configure settings for a specific version of an application for which you have installed a preference plug-in. The available settings vary with the application and version.

Software developers can create plug-ins for other applications using the Group Policy Software Development Kit (http://go.microsoft.com/fwlink/?LinkId=144).

·         For the Devices type, the Remove this item... option is only available if the Action is set to “Disable”.

·         When selecting the Remove this item... option for the Local Users and Groups type, you get two options: it will set the Action to “Replace” in order to completely remove the group, or to “Update” to simply remove a member. Keep this in mind if the GPP is to add a member to a built-in group such as “Administrators”. Would the GPP be able to remove that group if the Action is set to “Replace” and the GPP falls out of scope? I was unwilling to test this on my computer.

·         With the Power Options GPP type, you get a warning saying that you cannot select the Remove this item... option if you try to; however, if you click OK, it does seem to take effect.

The bottom line is to use this Remove this item when it is no longer applied option with care. Understand the ramifications of removal if the user or computer loses access to the GPP.

Item-level targeting
This option is mentioned here before the next one merely to remind us of its capabilities, since it will feature in the upcoming example scenario. There are no hidden surprises in this feature, so I won’t be expanding upon it. Basically, you can filter if the individual GPP gets applied depending on a range of criteria, including items such as AD site, IP subnet, user group membership, OU, and many others. (It would be best practice to write a plain English summary of the criteria in the Description field of the GPP.)
This targeting is on top of the targeting that is being applied at the entire GPO level, such as with OU links or ACLs.

Apply Once and do not reapply, Issue One
As its name implies, this feature will only apply one time to a computer if it’s a Computer Configuration, or one time to a user’s profile if it’s a User Configuration. How does the computer then know not to apply this GPP anymore? This option comes with its own special bag of tricks. Let’s proceed with setting it and see what happens.

a.       I modified the existing “TestGPP” GPO I created in the “GPP Review” section above, the one that sets the user’s Desktop background to red (255 0 0). I set the Action back to “Update” and cleared the Remove this item when it is no longer applied option.

b.      I then selected the Apply Once and do not reapply option.

c.       After the user logged on one time and received this GPP again, a new registry value got created in the user’s profile, under HKCU. Note the value created for my test user:

key: HKCU\Software\Microsoft\Group Policy\Client\RunOnce
value name: {11950636-4A63-46A1-9A52-3854F61C6149}
value data: empty

If this was a Computer Configuration as opposed to a User Configuration, the registry value would be under HKLM. For the duration of this article, I will refer to this registry value as the registry “flag”. This is a flag that tells the GPP CSE not to process that GPP again.

d.      Note an additional entry in the “registry.xml” file of that GPP, located in the domain’s SYSVOL folder, under the GPO’s GUID folder (discussed in detail in the “GPP Review” section above):

<Filters>
  <FilterRunOnce id="{11950636-4A63-46A1-9A52-3854F61C6149}"/>
</Filters>

Notice the matching numbers. Next time when the user logs on, the GPP CSE will still process that GPP. However, it will not apply the values of the GPP because the ID number in the user’s profile (the flag) matches the “RunOnce” ID in the “registry.xml” file.

The usual Group Policy scoping behaviour applies when settings are modified and this Apply Once and do not reapply option is selected. If it is a Computer Configuration GPP, it will run once for each computer the first time that computer receives the GPO. If it is a User Configuration GPP, it will run once for the user’s profile the first time they receive the GPO; that is not necessarily once per user. It depends on the profile scenario in place: with Roaming or Terminal Services profiles it will run once for that user, but if only local profiles are used, it will run once per user per computer (in other words, per profile).

All right, let’s get fancy. Let’s apply Item-level targeting to this GPP and let it apply only to members of the “TestGPPgroup” user group. Note that our “TestGPP” user is not yet a member of this group.

a.       First, let’s set the user back to as if this GPP had never run. We will do this manually for now. I logged on as the test user, changed the background colour to brown again, ran “regedit.exe” and removed the flag “HKCU\Software\Microsoft\Group Policy\Client\RunOnce\{11950636-4A63-46A1-9A52-3854F61C6149}”, and then logged off.

b.      Then I modified the existing registry GPP in the “TestGPP” GPO. In the Common tab, I selected Item-level targeting and clicked the Targeting button to add criteria.

c.       I added a Security Group criterion requiring the user to be a member of the “TestGPPgroup” group, and then closed the GPO Editor.

d.       I logged back on as the test user and, as expected, my background was still brown. The GPP had not changed it to red. But wait! The test user received that registry flag once again with the ID of the GPP not to process.

e.      I then decided that I wanted the “TestGPP” user to be a member of the “TestGPPgroup” group so that he has to have a red background on his Desktop. I added that user to the group.

f.       I logged on as the test user and still saw a brown background. The GPP did not apply, even though I was now a member of the criteria group.

This behaviour is important to remember. The registry flag holding the ID of the GPP not to apply took precedence over the fact that the user now matched the targeting criteria. This makes sense: the next time the user logs on, or Group Policy updates, the user will probably still be a member of that group, and if it was a valid “apply once” the first time, we would not want it to run again. (What would seem to make more sense is if the GPP CSE didn’t add the registry flag when the contents of the GPP did not, in fact, get applied in the first place.)

You cannot pre-configure a GPP, set it to Apply once..., and set targeting criteria where all future intended recipients do not currently match that criteria.
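The flag-before-targeting ordering can be sketched in a few lines of Python (a toy model of the CSE's decision, not actual Microsoft code; the GUID is the RunOnce ID from our test GPP):

```python
# A sketch of the "Apply once" ordering demonstrated above: the CSE checks
# (and writes) the RunOnce flag even when item-level targeting does not
# match, so a user who later meets the criteria never gets the setting.
def process_apply_once(flags: set, runonce_id: str, targeting_matches: bool) -> bool:
    """Return True if the preference's values are actually applied."""
    if runonce_id in flags:        # the flag wins, regardless of targeting
        return False
    flags.add(runonce_id)          # flag is written even if nothing is applied
    return targeting_matches

gpp_id = "{11950636-4A63-46A1-9A52-3854F61C6149}"
flags: set = set()                 # models HKCU\...\Group Policy\Client\RunOnce

first = process_apply_once(flags, gpp_id, targeting_matches=False)
# ...the user is then added to the targeted group...
second = process_apply_once(flags, gpp_id, targeting_matches=True)
print(first, second)               # neither pass applies the values
```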

Issue One Work-Around
There is a work-around if you need to reapply the GPP to its intended users, provided it is acceptable to reapply it to all users who match the targeting criteria or who are within scope of the GPO filtering: reset the Apply Once and do not reapply option. Un-check it from the GPP and click Apply, then check it again and click Apply. This generates a new “RunOnce” ID for the GPP, as can be seen in the updated “registry.xml” file.

If it’s not possible to run the GPP again for all users, it gets a bit messier. You would either need to manually remove the existing ID from the user’s “HKCU\Software\Microsoft\Group Policy\Client\RunOnce” registry key, or create an “Apply once” GPP to do just that, targeting a new specific group called “RemoveTestGPP” or similar.
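For the manual route, the flag value can be deleted by importing a .reg file; the "=-" syntax removes a named value. Using the RunOnce ID from our test GPP:

```
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Group Policy\Client\RunOnce]
"{11950636-4A63-46A1-9A52-3854F61C6149}"=-
```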

Apply Once and do not reapply, Issue Two
You can copy and paste GPPs from the context menu.

a.       I simplified our existing registry GPP in the “TestGPP” GPO. I removed Item-level targeting.

b.      I then right-clicked on it, selected Copy, and pasted it in the same area. In the copy, I changed the value name to “ActiveTitle” and set its data to red, 255 0 0.

c.       Look at the “registry.xml” file now. I have removed the irrelevant bits for this point.

<Registry name="Background" ...>
  <FilterRunOnce id="{11950636-4A63-46A1-9A52-3854F61C6149}"/>
</Registry>
<Registry name="ActiveTitle" ...>
  <FilterRunOnce id="{11950636-4A63-46A1-9A52-3854F61C6149}"/>
</Registry>

Note that the “RunOnce” IDs are the same for both values.

This means that though this is a brand new value to modify, because the “RunOnce” IDs are the same and because the user has that ID in their registry flag from our exercise above, this new value will never be applied to that user.
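If you have many copied preferences, the duplicated "RunOnce" ID can be detected by script. This Python sketch scans a sample "registry.xml" for FilterRunOnce IDs appearing more than once (the sample is a simplified fragment based on the listings above, not the full schema):

```python
# Detect the copy/paste pitfall: FilterRunOnce IDs that appear more than
# once in a GPP "registry.xml". Element names follow the fragments shown
# in this article; a real file carries many more attributes.
import xml.etree.ElementTree as ET
from collections import Counter

sample = """
<RegistrySettings>
  <Registry name="Background">
    <Filters><FilterRunOnce id="{11950636-4A63-46A1-9A52-3854F61C6149}"/></Filters>
  </Registry>
  <Registry name="ActiveTitle">
    <Filters><FilterRunOnce id="{11950636-4A63-46A1-9A52-3854F61C6149}"/></Filters>
  </Registry>
</RegistrySettings>
"""

counts = Counter(f.get("id") for f in ET.fromstring(sample).iter("FilterRunOnce"))
duplicates = sorted(gpp_id for gpp_id, n in counts.items() if n > 1)
print(duplicates)   # the pasted copy shares the original's ID
```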

Issue Two Work-Around
Similar to the work-around for Issue 1, reset the Apply Once and do not reapply option. Un-check it from the GPP and click Apply, then check it again and click Apply. This will give the new GPP a new “RunOnce” ID, as can be seen in the updated “registry.xml” file.

Apply Once and do not reapply, Alternatives
Repeating my hopes in the introduction, I was really glad to hear about this Apply once... feature of GPPs. I thought that finally there was a way we could “suggest” settings to new users, but allow them to change it later on, for good. This, I thought, was a good alternative to modifying Default Profiles.

Doubtless there are situations where this feature will work perfectly, but if we want to avoid having to worry about resetting “RunOnce” registry flags, we will have to go back to the tried and true methods.

Microsoft doesn’t want us overwriting the Default Profile (although there are ways to do it), but we can still modify it. A recap on how to do this:

a.       On the computer with the Default Profile to modify, make sure we can see hidden and system files, as well as their file extensions. Modify Folder Options accordingly.

b.       Run “regedit.exe” and select “HKEY_USERS” in the left pane.

c.       Click File, Load Hive, and browse to “c:\users\default” and double-click on the “ntuser.dat” file. (This is for Windows Vista/7/2008. For Windows 2000/XP/2003, the default folder will be “c:\Documents and Settings\Default User”.) Give it any Key Name at all, so long as you remember what you named it. I’m naming mine “DefUser”. There will now be an “HKEY_USERS\DefUser” key. Expand this, and the keys under there will match those found under HKEY_CURRENT_USER (or HKCU).
d.       You can directly modify registry values in there, or if there are a lot of changes as you have set them in your current profile, you can export keys (not the whole hive) into a registry file, replace all text strings “HKEY_CURRENT_USER” with “HKEY_USERS\DefUser”, and import it again.

e.      Select “HKEY_USERS\DefUser” in the left pane and click File, Unload Hive, to save the values to the Default Profile.
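Step d can be scripted. This Python sketch simply retargets an exported .reg file from the current user's hive to the loaded "DefUser" hive (the file contents here are illustrative):

```python
# Retarget an exported .reg file so it imports into the Default Profile
# hive loaded as HKEY_USERS\DefUser (the key name chosen in step c above).
def retarget_reg_export(reg_text: str, loaded_key: str = "HKEY_USERS\\DefUser") -> str:
    """Replace every HKEY_CURRENT_USER reference with the loaded hive path."""
    return reg_text.replace("HKEY_CURRENT_USER", loaded_key)

# Hypothetical export of the Colors key from the current profile.
exported = '[HKEY_CURRENT_USER\\Control Panel\\Colors]\n"Background"="255 0 0"\n'
print(retarget_reg_export(exported))
```

The rewritten text can then be saved and imported with regedit.exe while the hive is still loaded.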

Alternatively, you can implement login scripts that make use of flag files. In the example below, H: drive is the user’s network Home Directory and G: drive is a common location.

@echo off
rem Apply a one-time registry change only to members of the "Background" group.
rem ifmember.exe (Resource Kit) sets ERRORLEVEL to the number of listed
rem groups the user belongs to.
ifmember.exe Background
if not errorlevel 1 goto Skip
rem Skip if this user's flag file exists (the change was applied before).
if exist h:\flags\Background.flag goto Skip
rem Apply the registry settings silently, then create the flag file.
regedit.exe /s g:\share\configs\Background.reg
echo Background flag file created >> h:\flags\Background.flag
echo From %computername% - %date% %time% >> h:\flags\Background.flag
:Skip

The difference in logic here is that the flag does not get created if the targeting criteria are not met, unlike the GPP Apply once... feature.
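That difference in flag logic can be boiled down to two tiny decision functions. This is a hypothetical sketch in Python, not real GPP or script code; it only models the behaviours described above:

```python
# Toy model of the two behaviours. apply_gpp_runonce: GPP "Apply once"
# stamps its RunOnce flag even when item-level targeting filters the
# user out. apply_script_flag: the login script only writes its flag
# after the settings are actually applied.

def apply_gpp_runonce(meets_criteria: bool, flagged: bool) -> tuple:
    """Return (settings_applied, flag_now_set) for the GPP behaviour."""
    if flagged:
        return (False, True)       # already ran once; never runs again
    applied = meets_criteria       # filtered users get no settings...
    return (applied, True)         # ...but the flag is set regardless

def apply_script_flag(meets_criteria: bool, flagged: bool) -> tuple:
    """Return (settings_applied, flag_now_set) for the script behaviour."""
    if not meets_criteria or flagged:
        return (False, flagged)    # no flag written when criteria fail
    return (True, True)

# A user not yet in the target group:
print(apply_gpp_runonce(False, False))   # (False, True) - locked out for good
print(apply_script_flag(False, False))   # (False, False) - will apply later
```

Once the user is added to the group, the script version applies the settings on the next logon; the GPP version never will, because its flag is already set.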

Summary
Group Policy Preferences are a nice tool to have, and will save many administrators from having to write login scripts or build custom Group Policy Object Administrative Templates to accomplish the same thing. The separate GUIs shown for each type of preference are very handy and save us having to figure out registry keys, and item-level targeting will prove to be an extremely useful tool for most of us.

There are four possible actions for a GPP to take and five additional options that can be selected, though not all are available for all types of settings. There are two options in particular that need special attention.

The Remove this item when it is no longer applied feature should be used with care. If that GPP falls out of the client's scope for any reason, including network disconnection, the values associated with it will not revert to their previous state; they will be deleted completely, leaving the user to receive new settings from the Default Profile.

The Apply Once and do not reapply feature displays some behaviour that might be unexpected. If it is used in conjunction with item-level targeting and the intended recipients do not yet meet the criteria (for example, they have not yet been added to a specified group), they will not get settings from that GPP even once. It will run, filter on the criteria, and not apply the values, but it will still place a “RunOnce” registry flag in the user’s profile or in the computer registry hive.

If you copy a GPP with this option, the copy retains the original ID, so users who have already run the original GPP will have their registry flag set to also exclude the new copy. There are work-arounds to all of these behaviours and issues, but they are important enough to remember because they could have serious ramifications if the values in question configure critical settings in a business application.

Wednesday, 3 February 2010

Setting up a London Office

Overview 
No matter how small a new office will be, whether for a new company or a branch office, someone in Information Technology (IT) should be involved in the planning process of setting up that office from the beginning. If there is no project plan beforehand, then IT should be involved from the moment the lease is signed, or sooner, in order to get that new office up and running as soon as possible.

There are many items for IT to cover, but the main reason for this early involvement is the project critical-path item, communication lines (your connection to the Internet and to e‑mail). This is especially important in the United Kingdom where it can take up to two months for a line to be put in.

There are options for smaller offices that may involve temporary residence in managed office premises and/or temporary “smaller” Internet lines. With this scenario in the past, we have been able to set up a four-person office ready for users within seven business days.

This article will discuss IT options and requirements to consider when planning to set up a new office in London. The word “we” will usually refer to those in IT that would be performing much of the work described, and not necessarily the author of the document.

Managed Office Space
Smaller companies can select from many managed office premises available with little lead time required, sometimes as little as a few days. These will usually include managed Internet communications, furniture, use of a phone system, and a shared receptionist. Some may include managed computer workstations. In this case, there would be no requirement at all to engage or hire an IT person.

If the plan is to eventually move to more permanent premises, however, then IT should be involved even at this phase of setting up. IT migrations are always significantly more difficult and expensive than building new infrastructure. Therefore it may make sense to initially set up new company-owned or leased workstations so that when it comes time to move, they are simply transported to the new office. User accounts, settings, and data do not need to be migrated to new machines.

No matter how small the business, we always recommend using a server to centrally store and back up data. Data should never be permanently stored on workstations. This may be managed server space at the managed premises, but again it would make sense to set up an independent one if the plan is to move later.

You don’t usually need to worry about the network in these premises, but occasionally technical requirements may conflict with the premises’ policies. For example, your network may need specific access to an Internet resource that is not allowed through the premises’ network firewalls. In this case, we can request an “open” unmanaged Internet connection from the premises, install our own firewall, and manage Internet connectivity independently. This would also become another component that is easy to migrate to a new office.

Communication Lines in London
There are many Internet Service Providers (ISPs) and managed Wide Area Network (WAN) carriers, but at the end of the line there is always the final connection from their network into the building and to your network. This is called the “local loop”. In London, this generally means either British Telecom (BT) or COLT Telecom Group: these are the two main companies that either already have cabling under the street into the building or will lay new cabling. You often won’t deal directly with these companies, since we would choose carriers or ISPs that also manage the local loop. Even so, it is beneficial to ask which local loop will be used.

We will come right out and say that we prefer COLT, for several reasons. This is an unbiased preference, not based on an agency or any other business relationship with COLT. It is also the preference of many ISPs and communications carriers.

  1. We usually find that COLT can deliver faster than BT. Usually COLT can have the local loop set up within 30 business days, whereas BT often takes 45 or more.
  2. We feel that COLT are more flexible to deal with. Even when the local loop is managed by an ISP or carrier, we still have to deal with it when it is installed into the computer room. For example, if we are getting three circuits installed, all using COLT for the local loop, we can easily ask COLT to combine the work for the three circuits in terms of equipment (less communications rack space required) and site visits. BT would typically treat this as three separate jobs, install three sets of equipment, and come on site three times, and we would find it extremely difficult to reach the right person to change that.
  3. We have found it easier to deal with COLT for any changes required after implementation.
In spite of this, there may be reasons to use BT. COLT may not have cable laid in the neighbourhood of your new office. If you have redundant Internet or WAN circuits, we may also choose to have redundant local loops. We have also come across a situation where the landlord of the building didn’t allow COLT in with a new physical line, but would allow BT in because they already had infrastructure inside the building.

Which brings us to another item that slows things down: wayleave agreements. A wayleave is a legal document in which the owner or landlord of the building, and sometimes also the occupant of the premises, gives the local loop provider permission to lay cable into the building. The provider will require this document, and the occupier must usually pay the landlord's legal fees for completing it. The document will include drawings and detailed plans showing the exact cabling route into the building and then to your premises; it may define risks and mitigation plans; and it may include a degree of risk acceptance by the carrier. This is a critical-path project item.

In central London, there is also a wireless ISP that may be sufficient as a backup link.

Smaller companies can set up an interim BT DSL phone line (or multiple lines) for Internet services while the main lines are on order. These are the same type of lines that would be used for home Internet. As with the main lines, there are many ISPs offering DSL Internet, but they all run over a BT phone line, which is a requirement. (One carrier, Virgin, has its own cables to many premises, but it does not yet offer business-level support.) This can shorten the lead time for initial Internet and e‑mail connectivity to two or three weeks. Understand that performance will be slower than dedicated business-class lines. Once the permanent lines are in, there is the option to keep the DSL link as a backup.

First Step: Analysing, Defining, and Planning IT Requirements
As with all projects, setting up IT for a new office will run more smoothly, deliver what’s required, and finish on time and within budget if it is properly planned. For this reason, it is always cheaper in the long run to hire an IT person or engage an IT consultant early in the process. This is a large topic on its own, so we won’t dwell on it here. Suffice it to say that if the requirements are properly defined by management and the IT resource, then we know what to order and build, deliver the results correctly implemented the first time, and have happy users from Day One.

Communications Room, Cabling, and Roof Items
For very small companies with no dedicated communications room (or comms room for short) available, most components in this section can fit into a special sound-proofed, cooled server rack that looks almost like regular furniture.

Mid-sized or larger companies moving into their own unmanaged premises, and having the opportunity to lay out a new floor plan, will need to pay special attention to the future comms room or rooms. These rooms have special electrical, air conditioning, fire protection, environmental, and security requirements. Even a small comms room not much larger than a closet has most of these requirements.

If there is no internal network cabling, or if new interior partitions are being built, then we will need to install new internal network cable runs between the comms room(s) and every point where there will be a computer or printer. Wireless Access Points (WAPs) may also need to be installed above suspended ceilings.

For these reasons, we find it hugely beneficial to be included in early meetings with the architects. We make sure that all of these items are considered in the plans early, thus saving the business countless hassles and expenses later on.

Sometimes the architect needs to get government planning permission for items going on the roof, which can include air conditioning equipment, wireless communications antennas, or a television satellite dish. Permission is more likely required when located in the City of Westminster borough of London.

Procuring and Building the IT Infrastructure
While the communication lines are being processed and the server room is getting built, we can procure the network, telephone, and computer equipment, as well as arrange for outside services such as off-site backups and e‑mail anti-SPAM filtering. Documentation can also be started.

After the server room is built, we can begin work installing the servers and networking equipment as well as begin configuring the phone system.

Once the communication lines are in place and the server room is completed, then we can complete building and configuring the computers, telephone system, and other services. This includes servers, workstations, networking, printers, photocopier/scanners, video conferencing and audio-visual equipment.

Hand-Over
When all the technical work is complete, it should be documented thoroughly so that there is no continued reliance on one specific person or IT firm; the decision to continue with an IT supplier should never be forced. Procedures such as backups need to be implemented and documented. Time needs to be spent with key users loading and organising initial corporate data, and there will be an initial period when end users need more support.

Time Line
The simplified Gantt chart below shows a sample project for simple set-up for around fifteen users. Note how many tasks are dependent upon the communication lines and the comms room. If these are not handled correctly early in the project, they have the potential to delay it by months. It is best to involve IT early to help avoid these pitfalls.

Wednesday, 27 January 2010

Applications on Citrix - Architecture



Overview
Even after all these years, there are still many misconceptions about what exactly “Citrix” application publishing is. There are still CTOs and Infrastructure Managers who shy away from it after negative past experiences or who still don’t know precisely what it is; there are still many bad installations out there running at a fraction of their potential; and there is still end-user resistance caused by historically substandard encounters with the product.

Citrix, however, is just one very small piece of the over-all solution that provides end-users with published applications. IT people tend to label the whole configuration as “Citrix”, and the end-user community has now picked up this term. Application publishing may also be known as thin-client or server-based computing.

This article will delve into the most common areas that can make or break a successful application publishing solution: the applications themselves, the Windows environment, and the network. It targets those who are either maintaining an existing solution or considering implementing one.

The benefits of application publishing have been discussed enough that they are now common knowledge in the IT industry: applications managed in a limited number of instances rather than on every workstation, reduced application management costs, access from any location or device, and additional, more granular application security.

In terms of application publishing software, Citrix produces a product called XenApp, previously called Presentation Server, before that MetaFrame, WinFrame, and WinView, having started life as Multi-User. In its current iteration, it runs on a Microsoft Windows server and enhances Windows Terminal Services remote-control, multi-user computing. It also provides tools to easily manage published applications (essentially enhanced shortcuts) and most aspects of the XenApp servers, and to monitor alerts and produce reports. And that’s it. The tools are very useful and, in my opinion, if you want to provide application publishing it is better to have Citrix than not, but that is all it does.

The diagram below shows components that may be included in an application publishing solution. Note that of the 32 components shown, Citrix software is responsible for only three (shown in red), plus, optionally, a fourth remote access component from Citrix, Access Gateway.


32 components that can make up an application publishing solution

Therefore, if there is a problem with published applications, odds are it does not lie in those three components out of 32.

I used to say that Citrix is not an application where you just run “setup.exe” and you’re done. However, these days, in terms of pure Citrix, it is almost that simple. It is the design, configuration and tweaking of many of the other 29 components that take the time and skill to get the solution right.

For the remainder of this document, I will refer to a server hosting the applications and running Windows Terminal Services (and perhaps Citrix XenApp) as an “application publishing server”. When discussing technology specific to Windows Terminal Services itself, I will use the term "Terminal Server".

The Application
The first thing to realise is that not all applications will work well on a multi-user remote-controlled computer, an application publishing server. Some that spring to mind instantly are those that rely heavily on graphics or processing power, such as Computer-aided Design (CAD) or some (not all) insurance modelling programs. It is possible to publish and run these types of programs, but it will come down to end-user acceptance as to how “usable” the programs are in this environment, balanced with how cost effective it is. It may be that IT prefers to run all applications on Citrix, even if that means allowing only one instance of each per server. IT may calculate that the operational savings outweigh the added hardware expense for such an installation. Other applications are simply too old or badly written to function at all on a multi-user computer. Only thorough testing and User Acceptance Testing (UAT) will truly determine usability.

The majority of applications out there will, however, work quite well in a published environment, even most that are not officially supported by the vendor to “run on Citrix”. The key to this is to know the applications well, and to analyse them as you install, even supposedly “simple” ones such as WinZip or Adobe Reader. For more complex programs, it is always worthwhile having an application vendor specialist on hand during installation working side by side with the Citrix specialist.

An application must pass up to five technical criteria to work well in a multi-user, remote-control computing environment: multi-user, multi-computer, multi-location, performance, and co-existence with existing Windows profiles.

Multi-User
An example of an issue that may occur when multiple users access the same application on the same computer, in this case, an application publishing server:

A fictitious application is multilingual, but it stores the user language settings in a “c:\windows\language.ini” file.


User1 logs on for the first time, sets her preferred language to English, uses the application, and then logs off. User1 logs on again; her English preference has been set as desired, and so she simply uses the application and later logs off. User2 now logs on for the first time and sets his language to German. This changes the “ini” file on C: drive of the server.


User1 logs back on. Whether or not User2 is still logged in, User1 will now see the application in German. She will have to figure out how to change the language back to English, via German menus, or, more likely, call the IT Help Desk. The same will happen to User2 when he next logs on and finds English settings again.

This is obviously a simplistic scenario, as most applications these days would store such a setting in the HKCU registry hive, but it does display the types of issues to look out for. It also crystallises why the installer must know the application well when installing to Citrix. Maybe there is another “*.ini” file or registry setting that points to the location of the “language.ini” file, in which case it could be modified to point to each user’s individual copy on their network Home Drive (a script would then be required to ensure that each user has a copy of that file at logon). Alternatively, maybe the location is hard-coded into the program and there is no way around the issue.
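The logon-script half of that fix is simple: give each user a private copy of the file if they don't already have one, and never overwrite it afterwards. A hypothetical sketch in Python (the real thing would be a batch login script; all paths and the "language.ini" name are from the made-up example above):

```python
# Hypothetical per-user ini provisioning, mirroring the logon-script
# idea above: copy a template "language.ini" to the user's home drive
# on first logon, but never clobber saved preferences.
import tempfile
from pathlib import Path

def ensure_user_ini(template: Path, home_drive: Path) -> Path:
    """Copy the template ini to the user's home drive if it is missing."""
    user_ini = home_drive / "language.ini"
    if not user_ini.exists():          # only on first logon
        user_ini.write_text(template.read_text())
    return user_ini

# Demo using a temporary directory in place of H: drive:
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "template.ini").write_text("language=English\n")
    home = root / "home"; home.mkdir()

    ini = ensure_user_ini(root / "template.ini", home)  # first logon: copied
    ini.write_text("language=German\n")                 # user picks German
    ini = ensure_user_ini(root / "template.ini", home)  # next logon: kept
    print(ini.read_text())                              # language=German
```

The application's pointer setting would then be directed at the user's own copy rather than the shared one on the server's C: drive.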

Note that this scenario would also occur on regular workstations if users swap desks. Many multi-user considerations are identical to those that should be analysed for imaged desktop rollouts or application streaming.

Issues range from simple preferences (e.g., colour selections) to critical ones (as in the example above), or basic application settings such as pointers to data sources. Such pointers could be set on a per-user basis but not in a multi-user-capable manner; problems like that lead straight to data corruption. “Lock” files could be located on the C: drive of the server, making the application unusable for any user after the first.

Working around these issues is where the skill of the installer comes in. Hopefully, the registry can store all such settings, in which case custom Active Directory (AD) Group Policy Object (GPO) Administrative Templates can be built, or Windows 2008 Group Policy Preferences (GPP) can be implemented, to ensure that any registry changes are centrally managed and controlled. Perhaps a logon script is required to manipulate file locations, names, or contents.

Many preferences don’t need to be controlled, but merely set into the Windows Default Profile so that each user doesn’t have to go through application “mini-setups” the first time they use it. For example, the clicking of Adobe Reader’s license acceptance, or telling WinZip not to provide hints every time. I will discuss the Default Profile later.

Multi-Computer
Multi-computer means multi-application-publishing-server in this context. Even if only one such server will be available, these considerations will hold true when the time comes to add another server or to migrate to a newer one. An example of an issue:

Citrix load-balances applications mainly according to server capacity in terms of CPU usage and RAM, so a user doesn’t necessarily get the same server in the farm each time.


User1 logs into a complex application where a user typically sets up many preferences before beginning to use it. This can take up to five minutes, but the settings should then persist for the next time the program is used.


This Citrix farm has 10 load-balanced servers. It is possible that User1 gets logged onto each one over the next 10 or 20 days, having to go through that preference setup nine more times. The second or third time would generate a call to the Help Desk.

If such settings are stored in the user’s Windows profile, then the issue may be overcome using Roaming or Terminal Services Profiles, or redirected profile components. I will discuss these topics later in the Windows Environment section of this document. Perhaps the application handles it on its own with “*.ini” or small database files. Again, thorough analysis is required and, again, maybe it can be fixed with custom GPOs and/or scripts.

These types of issues and fixes would also absolutely occur in standard workstation builds with locally installed applications, where users may swap desks.

This is also where strong change control and standardisation are absolutely necessary for the success of the application publishing servers. All servers publishing the same applications must have those applications installed in exactly the same way and be at precisely the same version of Windows and of the application. A user may have a link deep within their roaming profile (or otherwise centrally located settings) that points to a file on the server’s C: drive. If the servers are not identical and that file is not where expected, the application will fail and a Help Desk call is generated.

Ideally, the servers should be grouped by function, and the entire build of all the servers in that group is identical, not just that specific application. One term for this is application silos. For example, Servers 1 through 4 publish Applications A through C, Servers 5 through 8 publish Applications D and E, and Servers 9 and 10 publish “weird app.” F. All servers within those groups should be identical, to the extent that cloning them is a good idea (albeit with many Citrix-specific considerations when cloning such servers).


One Citrix farm with three application silos - three sets of identical servers

Multi-Location
A very simple and common example:

Application1 works fine for all users at Location1, but is horribly slow for users from all other locations from across a Wide Area Network (WAN).

Usually this issue occurs whether the application is published from application publishing servers or installed locally on workstations. If the application is only accessible via Citrix, then this is a perfect example of how such a problem gets lumped in as a “Citrix problem” from the users’ perspective.

The obvious and usual issue is that the back-end data, whether files or an SQL database, is also at Location1. Users running the application from application publishing servers (or workstations) from other sites will experience delays. The easiest solution to this is to ensure that users from all locations use application publishing servers only at Location1. Otherwise, back-end data replication across sites will need to be analysed.

It’s more complicated if all users are using the application publishing servers at Location1 with no problems, yet only the “remote” users experience slowness. This type of problem is usually related to those users’ Windows environment: perhaps their Home Drive is hosted at a site other than Location1, or AD GPOs are redirecting portions of their Windows profiles to a server outside Location1.

If the problem only occurs during logon or program launch, it is also likely a Windows environment issue pertaining to GPOs or login scripts referring to servers other than at Location1.

These types of issues will be discussed in detail later in the Windows Environment section of this document.

Performance
No example is needed here: applications may appear slower when published from Citrix than when installed locally on workstations. If the workstations are on the same high-speed network segment as the back-end data, and if the workstations are adequately powered, then the applications will in fact probably be faster from the workstations. Many factors influence performance, so this section only scratches the surface.

If the speed differences are almost negligible and it is simply down to users’ expectations, then there is often some tweaking that can be done to enhance speed in small increments. For example, reducing unnecessary colour depth, eliminating Windows “3‑D” graphical enhancements and unneeded operating system (OS) services and processes, and removing certain unneeded application functions may be enough to bring the speed up to expectations. Small examples include removing auto-update checks from application launch, Internet lookup features (for example, MS Office online Help), and animation features (especially the MS Office Assistant). These fixes are usually applied through vendor-provided GPO Administrative Templates (such as those that come with MS Office), custom-created ones, or GPP.

Other performance issues were discussed in the previous section, Multi-Location.

Other issues could actually be down to the network. Network latency is the real thin-client killer, which will be discussed in detail in the Network section later in this document.

Co-existence with Existing Windows Profiles
This section pertains to environments with multiple Citrix farms, shared roaming profiles between application publishing servers and workstations, or redirected Windows profile portions used both when logging onto the servers or a workstation. This is especially relevant if the application in question is installed on the workstation as well as the Citrix farm, perhaps for remote access or Disaster Recovery (DR) use only.

Examples of problems that could arise with these configurations include those described in the Multi-Computer section; in an undisciplined IT environment, general application failures could occur.

It is possible to run published application servers with any of these configurations if due care is taken during installation and IT operations runs a tight ship. This will be discussed more in the Windows Environment section of this document.

Windows Environment
After application configuration, tweaking, and tuning, the next crucial area that determines application publishing failure or success is the Windows environment. This includes Home Drive or Folder location; printing configuration; the location of “My Documents”; network drive mappings; and AD GPO or Novell ZENworks components (referred to simply as AD GPOs for the remainder of this document), which include roaming or Terminal Services profile configuration and location, redirected Windows profile components, perhaps a mandatory profile, and logon/logoff scripts.

In my experience, I find that after badly installed applications, badly configured Windows profiles are the most common source of “Citrix” malfunctions. It should also be noted that many of the Windows environment issues described here also apply to workstation Roaming Profiles or, more generically, workplaces where "hot desking" occurs.

Windows Profiles
Technologies that can be used to create or maintain Windows profiles include Roaming, Terminal Services, and Mandatory profiles; local or central Default Profiles; redirected profile components; and hybrid profiles.

If more than one application publishing server is going to be used, then at least one of the technologies above must be used to allow smooth transition for a user to work first on one server and then another. It's a good idea even if only one server will be used, in order to allow for easier future server additions or replacements.

Local Default Profile: Users logging onto a server for the first time will receive a mirror image of this default profile, located on the server's C: drive. In operating systems older than Windows Vista/7/2008, tweaking this profile was a good way to provide settings that users are later allowed to change.
Often, however, this profile is over-used as a mechanism to deliver custom settings, and profiles become corrupted or almost unusable. Furthermore, Microsoft now frowns on the practice of over-writing the Default Profile.
(Windows Server 2008 can now use Group Policy Preferences (GPP, even if the AD infrastructure is still at the 2003 level), which allow distributed settings that users can then change temporarily or permanently.)
Central Default Profile: This works the same as a local Default Profile, except that it is located centrally at "\\AD_Domain_Name\NetLogon\Default User.v2". Windows knows to look there first if a new user has no existing local, roaming, or Terminal Services profile (and Windows Vista/7/2008 knows to look for the path ending in ".v2").
This has the advantage that only one Default Profile needs to be created and managed. The disadvantage is that only one Default Profile is available, which may not suit machines with a different build or set of applications.
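The version-dependent lookup is worth making explicit: Vista/7/2008 clients use the ".v2"-suffixed folder on the NetLogon share, while 2000/XP/2003 clients use the un-suffixed one. A hypothetical helper, in Python purely for illustration (the domain name is an example):

```python
# Hypothetical helper mirroring the lookup rule above. Vista/7/2008
# clients append ".v2" to the central Default Profile folder name;
# older clients use the plain folder. "example.local" is made up.

def central_default_profile(domain: str, vista_or_later: bool) -> str:
    """Return the central Default Profile path a client would look for."""
    base = "\\\\" + domain + "\\NetLogon\\Default User"
    return base + ".v2" if vista_or_later else base

print(central_default_profile("example.local", vista_or_later=True))
# \\example.local\NetLogon\Default User.v2
```

In a mixed environment, this means maintaining both folders on the NetLogon share if both OS generations are in use.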
Mandatory Profile: This is a centrally stored, preconfigured profile that users must use; any changes made to it are not saved. Realistically, such profiles are only suitable for kiosk or guest computers. Business users need to be able to customise their applications to a degree, and those customisations are usually stored in their Windows profiles.
Roaming Profiles: Roaming Profiles have been around since the Windows NT and even '95 days, and they have always received mixed reviews. Like application publishing, it is a technology that works really well when done right and can go horribly wrong when not. A typical flow:
a.       A user logs onto a workstation or Terminal Server for the first time. He has never logged onto the domain before. He gets his new local Windows profile built from either a local or centrally located Default Profile (Mandatory Profile not used).
b.      Any tweaks made to the profile are stored in the local profile - in this example, setting the Desktop background to purple.
c.       He logs off. His Windows profile gets copied up to a network location as defined either in his AD user account or in a GPO.
d.      He logs on again to a different workstation or server. Unless there are other GPOs preventing the use of Roaming Profiles on that computer, his profile is copied down from the central location and loaded before presenting the Windows Desktop to the user. His Desktop background is magically purple.
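The logon/logoff flow in steps a-d above can be sketched as a toy model, with dictionaries standing in for profile folders (all names here are illustrative, not real Windows APIs):

```python
# Minimal sketch of the roaming-profile flow in steps a-d above
# (a toy model; real profiles are folder trees, not dictionaries).

central_store = {}         # profiles as stored on the network share
default_profile = {"desktop_background": "default blue"}

def logon(username):
    # d/a: load the central roaming profile if one exists,
    # otherwise build a fresh one from the Default Profile.
    return dict(central_store.get(username, default_profile))

def logoff(username, local_profile):
    # c: copy the local profile back up to the network location.
    central_store[username] = dict(local_profile)

# a-b: first logon, then the user sets a purple Desktop.
session1 = logon("alice")
session1["desktop_background"] = "purple"
logoff("alice", session1)          # c: changes saved centrally

# d: logon at a different machine; the Desktop is "magically purple".
session2 = logon("alice")
print(session2["desktop_background"])
```

The model also makes the shared-profile caveat easy to see: whichever session calls the copy-up last overwrites the central profile, which is why a local client must not use the same roaming profile as the Terminal Server session.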
When implemented correctly, this works really well. It works so well that, contrary to popular belief, the same profile can even be used between workstations and Terminal Servers, without the need for separate Terminal Services profiles (discussed below). This shared profile scenario will only work if the following conditions are true:
·         Terminal Servers are being used for full "thin clients" - that is, the client is only a stripped-down DOS, Linux, or hardware device that has the sole purpose of connecting to a Terminal Server or Citrix farm and running applications or full Desktops. The local client must not also be using the same roaming profile. (Otherwise the central profile will always be updated when the client logs off, over-writing any changes made to the profile during the remote session.)
·         The Terminal Servers and workstations are built in the same manner: the same base applications are installed in precisely the same way and using the same paths (thus, if a Citrix server is involved, it should also use C: drive as its system drive, or whatever drive configuration the workstations also use). It is all right if the workstations have more applications than the Terminal Servers, or vice versa, so long as those programs that are installed on both platforms are installed exactly the same.
·         There are tight change control procedures in the IT department. On-the-fly changes will quickly corrupt this configuration.
·         The workstation and Terminal Server operating systems are at the same basic level (NT4 Workstation and NT4 Server, Windows 2000 Professional and Server, Windows XP and Server 2003, Windows Vista and Server 2008, Windows 7 and Server 2008 R2). These pairs of operating systems share the same folder structure (especially in user profiles), understand the same set of GPOs, and use the same profile technology.
The benefits of the scenario above are obvious: users get exactly the same front-end experience no matter which platform they log onto. A user may not even realise that their old workstation was "upgraded" to a Linux thin Citrix client over the weekend (except for the improved performance).
Problems with Roaming Profiles generally begin when users travel from site to site, making loading and unloading of profiles very slow across the WAN, or when files are stored in the default locations of "My Documents" or the Desktop (both within the profile), making loading slow because of the profile's excessive size. Redirection of profile components, discussed below, is required to ease both issues, and what I call "semi-roaming profiles" can be set up with one profile per user per site, using GPOs to define the locations.
Other historical problems are usually caused by incorrect configurations or permissions on the profile folders.
Terminal Services Profiles..... Where it is not possible to share user profiles between workstations and Terminal Servers, which is usually the case, separate roaming profiles can be defined for users of Terminal Servers. They work exactly like Roaming Profiles but apply only when logging onto a Terminal Server. It is possible to use Terminal Services Profiles alone, or both Terminal Services and Roaming Profiles; but if Roaming Profiles are used without Terminal Services Profiles, those Roaming Profiles will also be used on the Terminal Servers unless otherwise limited by GPO. When a user logs onto a Terminal Server, it searches for a profile in the following order:
a.       Terminal Services Profile path specified in GPO
b.      Terminal Services Profile path specified in the user object
c.       Per-computer Roaming Profile path specified in GPO
d.      Per-user Roaming Profile path specified in the user object
e.       Local Profile
f.       "\\AD_Domain_Name\NetLogon\Default User(.v2)"
g.      "C:\Users\Default" (for Windows Vista/7/2008)
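The search order above amounts to a first-match lookup, which can be sketched as follows (a hypothetical model; the source names are illustrative labels, not real API calls):

```python
# Hypothetical first-match model of the profile search order a Terminal
# Server follows at logon (the source names are illustrative only).

PROFILE_SEARCH_ORDER = [
    "ts_profile_gpo",          # a. TS profile path from GPO
    "ts_profile_user_object",  # b. TS profile path from the user object
    "roaming_profile_gpo",     # c. per-computer roaming path from GPO
    "roaming_profile_user",    # d. per-user roaming path from user object
    "local_profile",           # e. existing local profile
    "netlogon_default",        # f. \\domain\NetLogon\Default User(.v2)
    "local_default",           # g. C:\Users\Default
]

def select_profile(configured):
    """Return the first configured source; fall back to the local default."""
    for source in PROFILE_SEARCH_ORDER:
        if source in configured:
            return source
    return "local_default"

# A user with only a per-user roaming path and an old local profile
# gets the roaming profile, because d beats e:
print(select_profile({"roaming_profile_user", "local_profile"}))
```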
Component Redirection........ A form of redirection of Windows components has been around since 16-bit Windows "*.ini" files. We could always edit those "*.ini" files, and later the registry, to point components of Windows to somewhere other than C: drive. For example, program groups and icons in Windows 3.1 could be redirected to a shared read-only network location as could the Windows 95 Start Menu and Desktop.
The GUI for this finally came out in Windows 2000 GPOs, where we could redirect the Start Menu, Desktop, My Documents, and Application Data. Now for Windows 7/2008, we can redirect:
·         AppData(Roaming)
·         Desktop
·         Start Menu
·         Documents
·         Pictures
·         Music
·         Videos
·         Favourites
·         Contacts
·         Downloads
·         Links
·         Searches
·         Saved Games
Other folders, such as Cookies, can also be redirected using older methods such as custom administrative templates ("*.adm/x" files) or, better yet, in Windows Vista/7/2008 Group Policy Preferences (GPP) under Windows Settings, Registry.
Component redirection is almost a required technology to enable complete or semi-seamless roaming of users between workstations or application publishing servers, whether or not Roaming or Terminal Services profiles are also used.
·         Rather than training users never to save files to "My Documents", redirect this to their network Home Drive. This not only keeps data off the un-backed-up workstation, but also reduces Roaming or Terminal Services Profile loading and unloading times (log on and off times).
·         Rather than training users never to save files to the Windows Desktop, redirect this to a folder on their network Home Drive for the same reasons mentioned above.
·         Redirecting Internet Explorer Favourites to a folder under their Home Drive makes these links available wherever they log on, Roaming Profile or not.
·         Likewise, many (but not all) application preferences will also be readily available if the "Application Data" or "AppData" folder is similarly redirected.
Hybrid Profiles...................... These profiles use a combination of a Mandatory Profile and a third-party application to store certain saved settings. Products include TriCerat Simplify Profiles, Mancorp Managed Profiles, and Jumping Profiles, among others.
Self-Healing Profiles............. I first heard the term "self-healing" in one of Microsoft's earlier marketing blurbs, possibly under their "IntelliMirror" umbrella. I use it to describe my own hybrid profile solution, which I often implement in demanding environments or for demanding applications.
Basically, I use a combination of profile component redirection, a second hidden home drive for each user, and some complex login scripts. Each script matches up with a corresponding flag file in a hidden home folder, "\flags", and will only run if its matching flag file does not exist. For example, a particular application may require a large set of files in the user's profile. Rather than placing them in a Default Profile, the script places them there, preferably under a redirected portion of the profile or, better yet, in a configurable location directly under the home or hidden home drive. If this part of the application breaks, an IT technician can remove the flag file (or direct the user to do so when providing remote telephone support only). The user logs on again, the script re-runs, and that part, at least, is fixed.
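The flag-file pattern can be sketched in a few lines (a hedged illustration only; the real implementation would be a login script, and the paths and step names here are hypothetical):

```python
# Sketch of the flag-file pattern: a fix-up step runs only if its flag
# file is absent; deleting the flag forces a re-run at next logon.
# Paths and step names are hypothetical.

import tempfile
from pathlib import Path

def run_step_if_needed(flags_dir, step_name, fixup):
    """Run fixup() once, recording completion as an empty flag file."""
    flags_dir = Path(flags_dir)
    flags_dir.mkdir(parents=True, exist_ok=True)
    flag = flags_dir / f"{step_name}.flg"
    if flag.exists():
        return False             # already done; skip
    fixup()                      # e.g. copy an application's files into place
    flag.touch()                 # mark the step complete
    return True

# Simulate two logons: the step runs the first time, then is skipped.
flags = Path(tempfile.mkdtemp()) / "flags"
ran = run_step_if_needed(flags, "app_profile_files", lambda: None)
ran_again = run_step_if_needed(flags, "app_profile_files", lambda: None)
print(ran, ran_again)
```

Deleting "app_profile_files.flg" from the hidden "\flags" folder would make the step run again on the next logon, which is the whole point of the technique.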
A lot of this can go away now with the availability of Group Policy Preferences (GPP) for Windows Vista/7/2008.
So which technology should be used? I'll give my own preferences for two example scenarios at opposite extremes, both common and realistic, each using Windows Server 2008 Terminal Services and possibly Citrix XenApp Server, with drive letters picked out of the air.
Items common to both scenarios:
·         each user has an H: home drive
·         each user also has an I: home drive, hidden but accessible (described below)
·         profile component redirection for at least "My Documents", Desktop, and Favourites
·         no customisations to the Default Profile, if possible
Scenario 1: brand new application publishing server cluster or farm, strong IT change control
·         GPO loopback processing, Merge mode (described later)
·         Terminal Services Profiles implemented and controlled by GPO
Scenario 2: other server farms to contend with, weak IT change control, multiple groups of IT administrators
·         GPO loopback processing, Replace mode (described later)
·         no Roaming or Terminal Service Profiles
·         "self-healing" scripts in place to set up preferences or application requirements
·         extensive use of GPP


Home Drive/Folder Location
Poor configuration here can manifest itself in several ways. The most common is described in the Multi-Location section above, where remote users sharing the same application publishing servers experience performance issues while local users do not. This is likely because “local” users are those whose Home Drive/Folder is on a file server at the same site as the application publishing servers, while “remote” users have theirs on a file server across the WAN from the farm.

A common tweak to make an application multi-user capable is to place its components or configuration files (or possibly even application cache files) on the user’s home drive. If this location is across the WAN from the application executable, performance will be slow.


Another problem can occur if users regularly access their Home Drive/Folder through Windows Explorer rather than just using “My Documents” (which might point directly to “h:\files” as shown to the right). Users must have at least Read, Write, and Delete rights to the configuration folder described in the last paragraph. If they see the folder and do not know its purpose, they may simply delete it, negating the benefit of any multi-user tweaks and probably preventing the application from launching at all.

A good way around both of these problems is to create two user home drives: one visible that they use and can browse to and one hidden used for configuration details only. Ensure that at least the hidden home drive is on a file server local to the application publishing servers.

“My Documents” Location
Similar to the Home Drive issues described above, it can be a problem if the user’s “My Documents” folder is redirected to a network location at a different WAN site than the Citrix farm. Many applications, especially MS Office, default to opening or saving files in “My Documents”. This can make application launches and file opens and saves very slow. Even if MS Office applications are told via GPOs to store files at specific local locations, this doesn’t actually work for all applications or for certain components of some applications.

There is no perfect fix for this issue.

  • One way is to set up a “My Documents” folder for each user both at their home site, where their home file server is, and on a file server at the Citrix site; this of course may leave the user confused as to where they should be storing documents and may generate Help Desk calls.
  • Another option is to implement a form of file replication across the WAN and use a Distributed File System (DFS) Universal Naming Convention (UNC), a potentially expensive and complex project.
  • A third option is to leave “My Documents” pointing to the default location in the local Windows profile and train users never to use it, directing them to Home or shared drives and folders instead – not an ideal solution to enforce.

Drive Mappings
File server or "network" drive mapping issues are somewhat related to logon scripts, where a user’s “home” logon script may map drives on their “home” file server, which may in fact be remote from the published application servers. This may make application performance appear slow, when it is simply trying to handle files across a WAN.

It is important to have drive mappings, if necessary, on a file server close to the Citrix farm.

Printer Connections
Unlike network drive mappings, the user should usually receive the same list of network printers on the application publishing server as they would on their own workstation. This is the preferred method of printing: the print job is sent from the session on the Terminal Server (Citrix) to the print server nearest the user’s client workstation, processed on that print server, and sent to the network printer nearest the user.

Citrix does have a feature in its Independent Computing Architecture (ICA) protocol whereby it is possible to print directly from the user’s Citrix session, back through the ICA connection to the user’s client workstation, which then sends the print job to any configured printer, local or network. This has the advantage that users can print to non-networked locally-attached printers from Citrix, and it is also useful when connecting via remote access, but it is a slightly less reliable method of printing and uses more bandwidth in the ICA session. Where network printers are available, it is best to use regular Windows printing from the application publishing server.

Centrally Controlled Settings
Centrally Controlled Settings are Windows registry values and other components controlled by a central database of configurations. The most common tools for this are Novell's ZENworks, backed by eDirectory (formerly NDS, Novell Directory Services), or Microsoft's Active Directory GPOs. This document will concentrate on GPOs.

One aspect critical to application publishing servers is the loopback processing mode, a machine setting. To maintain control and integrity of an application publishing server, users are typically "locked down" more on the server than at their workstations. But most settings that control what an end-user can see or do in the Windows or application interface live in the User portion of the registry, and thus in the User portion of GPOs. So how do we apply more restrictive policies to the same user when they log onto an application publishing server rather than their normal workstation? By enabling loopback Group Policy processing. This setting is defined (in Windows Vista/7/2008) in:

Computer Configuration, Policies, Administrative Templates, System, Group Policy, User Group Policy loopback processing mode

In the example to the right, it would be set in Machine GPO C, linked to the Terminal Servers OU.


Policies get processed from the "top of the tree" and then "down", so in the normal scenario of a user at a workstation, the following occurs:
  • The workstation boots up in the Computers OU and receives first Machine GPO A and then Machine GPO B.
  • The user logs on in the Users OU and first receives the User GPO A and then the User GPO B.

When logging onto the application publishing server, the following occurs:

  • The server boots up in the TS Servers OU and receives first Machine GPO A and then Machine GPO C.
  • The user logs on in the Users OU and first receives the User GPO A and then the User GPO B.

and then one of the following two scenarios occurs:

If the loopback policy is enabled in Replace mode, the Group Policy client extensions disregard all user settings gathered so far and start again at the top of the tree, taking user settings but this time following GPOs down the tree along the path of the computer, rather than the normal behaviour of following the path of the user.

  • In this case, the user would receive User GPO A and then User GPO C, and there would be no settings received from User GPO B.

Or, if the loopback policy is enabled in Merge mode, the Group Policy client extensions keep all settings gathered so far but also start again at the top of the tree, taking user settings and this time following GPOs down the tree along the path of the computer.

  • In this case, the user would receive User GPO A, then User GPO B, then User GPO A again, and then finally User GPO C. If there is a conflict between any of the policies, the usual rule applies where the last one wins, in this case, GPO C.
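The two loopback modes can be summarised as a small processing model (a simplified sketch matching the example OUs in the text; the GPO names are from the example, not real objects):

```python
# Simplified model of GPO user-policy processing with loopback modes,
# matching the example in the text: the Users OU path yields GPOs A then B,
# and the TS Servers OU computer path yields user GPOs A then C.

def effective_user_gpos(user_path, computer_path, loopback=None):
    """Return user GPOs in processing order; the last one wins on conflicts."""
    if loopback == "replace":
        return list(computer_path)                   # only the computer's path
    if loopback == "merge":
        return list(user_path) + list(computer_path)  # both; computer's path last
    return list(user_path)                            # normal processing

user_path = ["User GPO A", "User GPO B"]
ts_computer_path = ["User GPO A", "User GPO C"]

print(effective_user_gpos(user_path, ts_computer_path, "replace"))
print(effective_user_gpos(user_path, ts_computer_path, "merge"))
```

In both modes, GPO C is processed last, so its settings win any conflict; the difference is whether User GPO B contributes anything at all.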
Network
Resource Placement
The most basic aspect of the network when designing the application publishing architecture is to keep the data close to the application publishing server, preferably via a 1 Gbps connection on the same network segment. The data in question includes any data used by the published applications themselves, such as an SQL database or Microsoft Exchange, shared files, files in Home drives, Roaming or Terminal Services Profiles, and any profile components or configuration settings redirected to hidden home drives.

Latency
The most important network aspect of application publishing server access across a WAN or the Internet is network latency - not a lack of bandwidth, although a lack of bandwidth can also cause higher latency. Latency is the time it takes a packet to make a round trip between the server and the client. The easiest way to measure it is with the PING command, reading the “time=” value in the results. Acceptable latency comes down to end-user tolerance, but in my experience Citrix performs well up to 250 ms, perhaps 300. End-user symptoms of high latency include slow typing (where the user waits for the screen to “catch up” with keys pressed), slow image redraws, and trailing mouse cursor movements.

Remote control protocols are typically efficient because traffic from the client workstation to the server consists only of mouse clicks and keyboard strokes, and traffic the other way is only compressed image deltas of what the user sees on screen. No actual data is sent back and forth. The bandwidth requirements for either Microsoft’s Remote Desktop Protocol (RDP) or Citrix’s ICA protocol are published as being in the neighbourhood of 20 to 30 Kbps per session, whereas real-world application use might more realistically require upwards of 300 Kbps per session. This allows for deep colour at 1920x1200 resolution, with perhaps printing back through the ICA tunnel.
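The gap between the published and real-world figures matters when sizing a link. A back-of-envelope calculation (my own illustrative sketch; the 20 percent reserve for other traffic is an assumption, not a standard):

```python
# Back-of-envelope session capacity for a WAN link, using the figures in
# the text: a published protocol minimum of ~30 Kbps per session versus
# a real-world planning figure of ~300 Kbps per session.

def max_sessions(link_kbps, per_session_kbps, reserve_fraction=0.2):
    """Sessions a link can carry, keeping a fraction in reserve for other traffic."""
    usable = link_kbps * (1 - reserve_fraction)
    return int(usable // per_session_kbps)

# A 10 Mbps link sized with the optimistic versus the realistic figure:
print(max_sessions(10_000, 30))    # hundreds of sessions on paper
print(max_sessions(10_000, 300))   # a tenth of that with real-world planning
```

The order-of-magnitude difference between the two answers is exactly why links sized from the published figures become saturated, and why saturation then shows up to users as latency.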

Dealing with poor latency can be a nightmare. The first thing to check is the bandwidth capacity. If there is not enough bandwidth available exclusively for these ICA or RDP sessions, high latency will occur. If the total bandwidth of a site is sufficient, perhaps Quality of Service (QoS) can be implemented to allocate a portion of the bandwidth exclusively for remote computing protocols. If both the server and client ends have ample bandwidth, then latency can be caused by crossing too many routers in between, usually meaning too many different communications carriers in between. The only fix is to deal with and perhaps change carriers, looking closely at latency SLAs. Some countries simply don't have the communication backbone to provide low latency. I have seen this with connections from Central America and specifically from the off-shore tax haven Labuan, a Malaysian island north of Borneo.

IT Operations
It won't be popular, but as an external consultant, I have to say that a common way I've seen a good application publishing environment destroyed is by the IT department themselves. Changes and "updates" are made on the fly with no change control or documentation, and only to the "problem server" and not to the whole farm; issues are fixed with work-around solutions rather than fixing the root cause; updates or bona-fide fixes are not technically tested nor tested for user acceptance; and other generally sloppy procedures can quickly wreck the product.

These practices will of course be detrimental to any IT infrastructure, but because application publishing is a little more demanding or fragile, it generally breaks first, thus giving "Citrix" its sometimes negative reputation.

Summary
The rest of the components shown in the diagram at the beginning of this document are fairly straightforward to configure, and they are usually either right or wrong, with few "grey areas" in between. They were included simply to show all the areas that can break an application publishing solution. The most common problem areas are also the most complex to get right: application installation and configuration, and Windows profiles. Next in line are the Windows environment and network placement of resources, followed by IT departmental procedures. Many things must be absolutely correct to make the solution work, and it's usually not the Citrix product that breaks it.