Thursday 20 September 2012

False Report of a Virus with Sophos AntiVirus


Last night GMT, Wednesday, Sophos AntiVirus released a pattern file which caused the program to report itself as a virus, along with some other self-updating programs (in my case, Ad-Aware). If you use Sophos and see notices about a virus called “Shh/Updater-B”, please do not take any action. It is not a real virus.

Going by forum reports on the Internet last night, it looks like this has affected tens of thousands of computers. After a few hours, Sophos finally publicly admitted there was a problem and posted an article and a fix here.

Background: pattern files are downloaded by the anti-virus program from its manufacturer to get the very latest list of known viruses out there, and instructions on how to find and quarantine them. Sophos' default update interval is every 10 minutes. The problem here was that one of the files it deemed to be a virus was the program component that actually does the updating, so there is a bit of a process to get around this. I suspect there will be a few physical visits to computers today.

I happened to be on-line last night and, of course, our systems are managed by Sophos AntiVirus. I saw it happen. All of a sudden I got notices from the program informing me of viruses. I could see that the files in question were Sophos files themselves, and for a while I was genuinely impressed that a virus had detected the anti-virus program and turned it into a virus itself. I went ahead and deleted the files before going on-line to research the problem. In the end I had to manually un-install Sophos (deleting files and doing registry sweeps) and then install it again manually. By this time the bad update had been removed, replaced by one to fix the problem.

It could be worse. I believe that every vendor out there has had something like this happen to them. I remember in 1999 when Norton AntiVirus' central console (newly installed by me at a client site) distributed an update which promptly caused all the Windows NT computers to crash with a "blue screen of death" (BSOD). Still, I would have thought that software production and testing processes would have improved vastly in the 13 years that have passed.

Wednesday 15 August 2012

Official: I've been 20 years in IT!

Actually, it's 20 and a half years. I know plenty who got in long before me, but I can claim to have been in IT when:
  • Internet and e-mail technology was there, but very rarely used by corporations, let alone individuals. I installed the first-ever e-mail server at a corporation spanning North America, and that was for internal mail only. We first used modems (at 2,400 bps) to connect to Bulletin Board Systems (BBS) to download drivers and technical notes directly from manufacturers, or to research in message boards. Our Internet at the time was dial-up only. The debate was whether CompuServe, AOL, or The Internet would be the way of the future, even before Bill Gates tried out the "Microsoft Network" to compete.

  • Internet resources were accessed at the time using Gopher or the Mosaic browser (later Netscape) and that really useful tool, Archie, which could search for specific files (I miss that). The search engines included AltaVista, WebCrawler, Lycos, and Yahoo. I remember a group of us huddled around after-hours at work as we downloaded our first porn image. It took about 20 minutes and then we had to use a command-line decoding program to convert the hex gibberish into the actual picture, line by line.

  • Network protocol and hardware debates: Ethernet vs. Token Ring (and I suffered many years with Token Ring); IPX/SPX vs. SNA vs. TCP/IP. I studied for almost a year and finally passed the Novell Networking Technologies exam, the old really hard one that asked you about bit positions and roles in a data packet and the transmission methods of the different data-link protocols. "All People Seem To Need Data Processing" (the mnemonic for the seven OSI layers).

  • I got in at 286 processors. Debates included IBM computers, or IBM-compatibles or "clones"; Micro Channel Architecture (MCA, with the blue-ended cards) or ISA (and later EISA). The IBMs were Model 50 and Model 55 (with its irritating case) at the time. Standard RAM was 1 MB, maybe 2 for special people. Our largest AS/400 midrange computer could hold just over 1 GB of data, and that with rows of disks that added up to the size of 3 full-size refrigerators. My latest memory card for my phone is about the size of my pinky fingernail and holds twice as much. Data was often transferred at the time via 5.25" floppy disks and later 3.5". It took 24 diskettes to install Microsoft Office 4.3.

  • My first operating system was DOS 3.3, but 5.0 was released soon after. One technician had Windows 3.0 installed, more for ooh-aah value than anything else. We made up menu batch-files, called from "autoexec.bat", offering four or five main programs: "Press 1 for Lotus." The big challenge was the 640k conventional memory barrier. One of my greatest accomplishments later was to boot DOS 5 with "config.sys" configuring expanded (vs. extended) memory, connect to a NetWare 3.11 server, load IBM's "PC Support" DOS software to connect to the IBM AS/400 midrange computers, launch Windows 3.1, and then be able to run Lotus 123 from a DOS window.

  • Corporate PC software at the time was WordPerfect (which I still miss), Lotus 123, Harvard Graphics, and I was taking a few lessons in dBase3, a database/programming tool. We still used these programs for a while even after tentatively rolling out Windows 3.1.

  • We had 300 sites across North America, all connecting using X.25 technology at about 9,600 bps.

  • I installed our first PC-based server: Novell NetWare 3.11 running on a Pentium with 2 gb of RAM. We experimented with NMENU so users could use the cursor keys to navigate to their preferred programs.

  • We carried pagers.
Of course, I knew a geek at high school who got into it a decade before I did. I remember being bored out of my skull at his place when he showed me a computer whose "disk" was a cassette tape. I had no idea what he was doing or trying to accomplish. Millions out there got in way before me and have even more ancient technology to trot out, but this is where I entered the scene.

IBM Model 50

Monday 18 June 2012

Remote Desktop Gateway for Workstations

Introduction 
We recently built a Microsoft Remote Desktop Gateway, formerly known as Terminal Services Gateway, for a client. They’re a smaller office and not quite ready for a full-on Remote Desktop Server (Microsoft has replaced “Terminal” with “Remote Desktop” in many of these product names) or Citrix server. The task was to provide gateway services to Windows 7 workstations on the internal office network.


At first we also tried using Remote Desktop Web Access to present the remote connections to the end-users, but the best it offers is a field into which the user must type the name of the desired internal workstation. That just wouldn’t have been seamless enough and would undoubtedly have led to calls to the Help Desk. We therefore discarded that idea and just created our own static web page that lists all the workstations available, complete with descriptions. This page gets rebuilt automatically by a script which extracts all such machines from Active Directory (sketched later in this post). The script is run as part of workstation build or decommissioning procedures in order to keep the list refreshed.

The gateway technology is a commercial-strength remote access solution and is the same one used for the full-blown Remote Desktop Server suite; however, the presentation part of the solution shown here is a cheaper, quick-and-dirty way to present the links to the end-users on a web page. The underlying technology is just as secure, but we are saving money by not having an actual Remote Desktop Server to which to connect, connecting only to powered-on workstations in the office. Further savings are realised because no Terminal Services Client Access Licenses (CALs) are required for this solution.

Environment
The client has about ten internal Windows 7 Professional laptops and desktops. There is a Windows Server 2008 R2 machine which is a DC and runs DHCP, DNS, file, print, anti-virus push, and WSUS. For this project we purchased a new server to act as the Remote Desktop (RD) Gateway and to take on and replicate a few roles from the other server. It is an HP DL360 G7 single-processor box with 8 GB RAM and RAIDed disks. The load for these services is very small, so of course this could easily have been installed on a virtual machine, had that environment been in place. We installed the Windows Server 2008 R2 operating system.

Remote Desktop Services
We added the Remote Desktop Services role and only the Remote Desktop Gateway role service plus its minimum required features.
  • At the initial wizard we opted to choose the SSL certificate later and also to create the authorisation policies later.
  • We decided to install the Network Policy Server role service onto this server, and so let the wizard do so.
  • We let the wizard install the required Web Server role services.

Digital Certificate
  • First, in IIS Manager, we selected the server in the left pane, then Server Certificates in the centre pane, and went through the create and complete certificate request procedures. We made sure to purchase a certificate from a common external third party so that the associated root certificates would already be on most computers out there. The common name of the certificate must match the URL that users will be typing or directed to when connecting for remote access.
  • Then in RD Gateway Manager we right-clicked on the server object and selected Properties. In the SSL Certificate tab we clicked Import Certificate and selected the certificate installed above.

CAP and RAP
A Remote Desktop Connection Authorisation Policy (RD CAP) determines which users can connect to the RD Gateway and a Resource Authorisation Policy (RD RAP) determines to which resources in the network they can connect.
  • Still in RD Gateway Manager, expand the server object, then Policies, and select Connection Authorisation Policies.
  • In our case, we created one CAP only, using the Windows Password authentication method and allowing only users that are members of the “RemoteGateway” user group we had created for this purpose. We disabled drives, ports, and plug and play device redirections. We set an idle timeout of 30 minutes and a session timeout of 240 minutes, after which sessions are disconnected. Obviously all these values are down to preferences.
  • Then in RD Gateway Manager select Resource Authorisation Policies.
  • We used the same user group we defined above and allowed users to connect to computers in the “RemoteDesktops” Active Directory (AD) group. We kept the default port of 3389 for allowed connections.

Landing Web Site
The Web Role and associated Role Services were already installed as prerequisites for the Remote Desktop Gateway role service. The digital certificate was also already added, above. Static pages were then created to welcome users and to list available internal workstations (available in AD, not necessarily connected and powered on).
  • A “/remote” virtual directory was added to the Default Web Site, with its physical path at “d:\ServerApps\RemoteDesktops”.
  • The remote virtual directory has the “.reg” extension added to MIME Types with type “application/octet-stream”. This is because there will be a registry file for client Windows computers to import, discussed later.
  • The directory also has Require SSL checked and is set to ignore client certificates. It has Windows Authentication (only) enabled with the “Negotiate” and “NTLM” providers listed in that order. Extended Protection is off and Kernel-mode authentication is checked.
  • A sub-folder, “/remote/Desktops”, was added. It also has its SSL and authentication settings set as above.
  • The remote/Desktops sub-folder has the “.rdp” extension added to MIME Types with type “application/x-rdp”. This is the folder that will contain all the pre-created RDP (Remote Desktop Protocol) files that will connect to each internal workstation.
  • The sub-directory is also configured to enable its contents to be browsed from a web browser. Ensure that the resulting “web.config” file is itself hidden (via the file system) so that it does not appear in browse results.
To configure MIME Types, Directory Browsing, and SSL Settings in the folders above, launch IIS Manager and select the relevant folder in the left pane. The pertinent icons will appear in the centre pane.
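
These folder settings can also be applied from the command line with IIS’s appcmd tool. The following is only a minimal sketch, assuming the site and folder names above; check each command against your own configuration before running it:

rem Serve ".reg" files from /remote
%windir%\system32\inetsrv\appcmd set config "Default Web Site/remote" /section:staticContent /+"[fileExtension='.reg',mimeType='application/octet-stream']"

rem Serve ".rdp" files from /remote/Desktops
%windir%\system32\inetsrv\appcmd set config "Default Web Site/remote/Desktops" /section:staticContent /+"[fileExtension='.rdp',mimeType='application/x-rdp']"

rem Require SSL, ignoring client certificates (repeat for /remote/Desktops)
%windir%\system32\inetsrv\appcmd set config "Default Web Site/remote" /section:system.webServer/security/access /sslFlags:Ssl /commit:apphost

rem Windows Authentication only
%windir%\system32\inetsrv\appcmd set config "Default Web Site/remote" /section:system.webServer/security/authentication/windowsAuthentication /enabled:true /commit:apphost
%windir%\system32\inetsrv\appcmd set config "Default Web Site/remote" /section:system.webServer/security/authentication/anonymousAuthentication /enabled:false /commit:apphost

rem Allow the /remote/Desktops contents to be listed in a browser
%windir%\system32\inetsrv\appcmd set config "Default Web Site/remote/Desktops" /section:system.webServer/directoryBrowse /enabled:true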

Web Site Contents
The landing page has some instructions to end-users to run the one-time registry import if it’s the first time they are accessing this gateway from their current client computer, a link to the registry import file, and a link to the remote/Desktops sub-folder. Here is the registry file:

Windows Registry Editor Version 5.00
[HKEY_CLASSES_ROOT\RDP.File]
"EditFlags"=dword:00010000
"BrowserFlags"=dword:00000008
[HKEY_CURRENT_USER\Software\Microsoft\Terminal Server Client\PublisherBypassList]
"a long hexadecimal number"=dword:00000044

This sets up two things on the client Windows computer: it suppresses the prompt that would normally occur when downloading a “*.reg” file in Internet Explorer and it suppresses a second prompt to trust running the file from the file’s publisher. The RDP files will be digitally signed with the hash of the same certificate the web site uses, therefore “published” from that site. That hash number replaces “a long hexadecimal number” in the registry file above.
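
To find that thumbprint from the command line, certutil can list the certificates in the local computer’s Personal store (an assumption about where the certificate was installed); each entry includes a “Cert Hash(sha1)” line, whose value, with the spaces removed, is the number needed here:

certutil -store My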

The remote/Desktops sub-folder simply contains all the RDP files, one pointing to each internal computer. The name of each file includes the name of the workstation and also a description so that end-users will be able to identify their computer. This web site sub-folder is configured to simply list these files as a directory listing rather than to display a web page.

The RDP Files
Remote Desktop files can be created in a text editor such as Notepad, or created using batch files or scripts. It’s best to start the base or template file by launching Remote Desktop Connection, clicking Options, and filling in all your preferred settings in all the GUI tabs. Important fields:
  • computer: the internal DNS name of the computer to which to connect
  • user name: as this is a remote access solution, it would be more secure to leave this field blank
  • local resources: as this is a remote access solution, it would be more secure not to allow drives and plug and play devices to be available in the remote session
  • display and local experience: the lower the resolution and experience settings, the less lag; a personal choice
Then Save As the file. Now the file can be edited with a text editor. Some important fields to double-check below:

full address:s:internal DNS name, does not necessarily need to be fully qualified
authentication level:i:0
prompt for credentials:i:0
negotiate security layer:i:1
remoteapplicationmode:i:0
gatewayhostname:s:fully qualified domain name of the gateway, matching the certificate and URL
gatewayusagemethod:i:2
gatewaycredentialssource:i:0
gatewayprofileusagemethod:i:1
promptcredentialonce:i:1
connection type:i:2
redirectdirectx:i:1
use redirection server name:i:0
alternate full address:s:internal DNS name, does not necessarily need to be fully qualified

The next step is to digitally sign the RDP file. This is done by command line:

rdpsign.exe /sha1 “a long hexadecimal number without the quotes” /v NameOfRDPfile.rdp

The hash number is obtained from the “thumbprint” field of the digital certificate and also matches that in the registry file to import above. Once the file is signed, it can no longer be edited.

Managing the RDP Files
We created a script to manage these files. Our assumption is that all internal workstations should be available for remote access via the gateway, but not the servers (we access those via SSL VPN instead). The script extracts the names and descriptions of all computers from the “Desktops” and “Laptops” AD Organisational Units (OUs). Each computer name plus its description becomes the name of the resulting RDP file. For each file the script echoes out most of the values described above, including the unique computer name, and digitally signs the file. It then adds the workstation to the “RemoteDesktops” AD group.
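
Here is a minimal sketch of such a script in plain batch, using the directory services command-line tools (dsquery, dsget, dsmod) included with Windows Server 2008 R2. Every concrete name in it - the domain, OU and group distinguished names, gateway name, output path, and certificate hash - is a placeholder rather than the client's actual value, and only a few of the RDP template fields are shown:

@echo off
setlocal enabledelayedexpansion
rem Sketch only: regenerate, sign, and register one RDP file per workstation.
rem Repeat the loop for the "Laptops" OU; assumes every computer object has
rem a non-empty description field.
set GATEWAY=remote.example.co.uk
set HASH=0123456789abcdef0123456789abcdef01234567
set OUTDIR=D:\ServerApps\RemoteDesktops\Desktops
del /q "%OUTDIR%\*.rdp" 2>nul

for /f "delims=" %%C in ('dsquery computer "OU=Desktops,DC=example,DC=local" -limit 0') do (
  set "DN=%%~C"
  rem Derive the short computer name from the DN by stripping the "CN=" prefix
  for /f "tokens=1 delims=," %%N in ("!DN!") do set "PC=%%N"
  set "PC=!PC:~3!"
  rem The first value line of dsget output (after its header) is the description
  set "DESC="
  for /f "skip=1 tokens=*" %%D in ('dsget computer "!DN!" -desc') do if not defined DESC set "DESC=%%D"
  set "FILE=%OUTDIR%\!PC! - !DESC!.rdp"
  > "!FILE!" echo full address:s:!PC!
  >>"!FILE!" echo gatewayhostname:s:%GATEWAY%
  >>"!FILE!" echo gatewayusagemethod:i:2
  >>"!FILE!" echo gatewayprofileusagemethod:i:1
  >>"!FILE!" echo promptcredentialonce:i:1
  rem ...echo the remaining template fields listed above in the same way...
  rdpsign.exe /sha1 %HASH% /v "!FILE!"
  rem dsmod complains harmlessly if the computer is already a group member
  dsmod group "CN=RemoteDesktops,CN=Users,DC=example,DC=local" -addmbr "!DN!"
)

The same extraction could equally be done in VBScript or PowerShell; the echo-based batch form is shown only because it matches how the script is described above.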

So our procedure when building a new workstation includes placing it in either of the OUs mentioned above, keeping the description field of the AD object of the computer short, simple, and understandable to end-users (for example, “Jessica - TP X220”; it cannot include commas, etc., as it becomes part of a file name), and running the script. We also run the script after decommissioning a workstation and removing it from AD.

These scripts could be taken further. A script could perhaps run whenever users connect to the web site (I’m not a very strong web person) to audit and refresh the list of workstations. It could also PING each machine to ensure that it is online before adding it to the list.
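
A rough sketch of that last idea, reusing the placeholder OU from the script above: ping each candidate machine and only list the ones that answer.

@echo off
rem Sketch: report which workstations in the (placeholder) OU answer a ping
for /f "delims=" %%C in ('dsquery computer "OU=Desktops,DC=example,DC=local" -o rdn') do (
  ping -n 1 -w 1000 %%~C >nul 2>&1 && echo %%~C is online
)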

Access from the Internet
The server running RD Gateway and the web site resides on the internal network. The firewall has a 1-to-1 NAT mapping an external address to the internal, and it allows port TCP:443 to the external address. That is the only port needed, which makes this such a nice solution. Port 443 is allowed out from virtually every other internal network in the world. Obviously, there is also an external DNS entry for the fully qualified domain name of the gateway, the one that matches the digital certificate and the URL.

End-User Experience
There is a portal web page in the client’s DMZ. On it are two links: one for web-mail to the cloud hosted Exchange and the other to the remote access web site described above. This link is the URL which matches the digital certificate.

Once connected to the remote access site, the user is prompted for their AD (Windows domain) credentials. The domain name is required here, so the user must either type in “Domain Name\User Name” or, in our case, we have trained them to use their e-mail address. (We have an additional AD User Principal Name matching the domain name used for e-mail, and each user account has this set as their User Logon Name.)

They see the instructions and if they have not used this before from their current computer, they click on the “One-time setup” link, which imports the registry file. Then they click on the “List of computers” link, and from there they select their chosen workstation. Again, the computer must actually be at the office, connected, and powered up. Once they click the link, they are prompted for user credentials a second and last time from the RD Gateway, and then they are presented with their full Windows Desktop at work.

Users must be trained to Log Off when done and not to simply close the window. It keeps things more seamless and clean.

Limitations
This is a quick and basic solution that uses a convenient and secure method of connection. Limitations:
  • The workstation must be in the office, connected, and powered up. A user can either take their laptop home or leave it in the office for accessibility via remote access, but not both.
  • This solution is geared strongly towards Microsoft client computers. The registry import file is specifically for Internet Explorer. Mac users must be given a pre-configured Mac-equivalent RDP file, which connects directly to the RD Gateway, bypassing the more user-friendly and centrally manageable remote access web site.
  • Users must log on twice.
  • There are still some features available in a Citrix solution that are not available in a pure Microsoft environment.
Next Steps
The next step would be to add an RD Server, complete with the RD Session Host, RD Licensing, RD Connection Broker, and RD Web Access role services. The existing gateway server would continue to be used as is, and it would also host the additional role services apart from the RD Session Host itself. Once users become used to connecting to the RD Server only, we could do away with access to individual workstations. Single sign-on would be available, but it would still be geared strongly towards Microsoft clients.

The best solution, of course, is Citrix's XenApp solution, complete with their Access Gateway product. The existing gateway server would have its RD Gateway roles removed, but the server could then host the internal portion of Citrix’s Access Gateway and also its Web Interface. Citrix provides a more seamless, configurable, and flexible solution overall, and it works very well from Mac or Unix clients.

Friday 6 April 2012

London Olympics: Gearing Up Remote Access Systems

The London Olympic Games and Paralympic Games 2012 are approaching at lightning speed. The former run from July 27th to August 12th and the latter from August 29th to September 9th. The official web-site informs us that "the transport network will be significantly impacted during the Games". The relevant numbers:
  • current trips per day on the London Underground: 3.5 million
  • expected additional trips for the duration of the games: 20 million
  • expected additional trips on the one busiest day: 3 million
Almost double the number of commuters will travel on the busiest day! We've seen what happens if there is too much or the wrong type of snow, or too much heat, or too much rain, or someone falls on the track or takes ill, or any of hundreds of other events: the system virtually grinds to a halt. Other modes of transport then become overburdened, making the transportation experience in the capital anything from unbearable to impossible. I imagine that we will see scenes that echo the exodus from the City on the morning of the July 7th bombings in 2005: people queuing for miles for any transportation possible - to anywhere. There will be the helicopter shots from overhead to show this on television.

Roads will also become even more difficult, dealing with the general overflow but also with the ORN and PRN (Olympic and Paralympic Road Networks), roads closed to the public to allow speedier transportation through the city for athletes and "officials".

As our business involves consultants going on-site to client offices when we cannot do our work remotely, this is my plan for dealing with the issue: I'm going to buy a motor scooter.

At other businesses, however, staff may be able to work from home. And here is my prediction: that we at Stepney Marsh Systems will be working all-out during the month of June installing or upgrading remote access systems to allow employees to work from home this busy summer.

I have refreshed a very old document I wrote almost ten years ago about remote access methods (it appears as the next post, below). There are a few more options now and also a few that have become obsolete. The basic methods remain the same, as do the security considerations.

Now is the time for London businesses to analyse their upcoming remote access requirements. Are their current solutions sufficient?

Thursday 5 April 2012

Remote Access: Methods & Options

INTRODUCTION
Remote access technology allows users to connect to their office computing resources while out of the office. Often, this may only be to access e‑mail via web-mail using a browser, but it can also comprise access to all office applications and data. Such solutions could be accessed from the home, hotels, other offices, Internet cafés, or even outdoors via the mobile phone networks from devices such as the iPad.

Remote access can also be used for connecting a small remote office to a main office at low cost, or can be used as a form of disaster recovery. For example, if an office becomes inaccessible, users could continue working from home. No "hot site" office needs to be arranged.

Before we get into a discussion of typical remote access solutions, we need to understand the two primary methods of remote access: remote node and remote control.

Remote Node
This solution simply extends the office network to the remote Personal Computer (PC) or mobile device. The remote PC becomes a node on the main network. This is typically accomplished using a software-based Virtual Private Network (VPN) on the remote PC so that it can connect from any Internet connection.

Pros
  • seamless integration for laptop computers; users have the same computing environment whether at the office or on the road
  • easy to transfer files between the remote computer and the office
  • Implementation costs are usually lower than a remote control solution, as even low-end firewalls come equipped with a VPN connection these days.
  • The user can work on the PC without a connection to the office.
Cons
  • Software identical to that on the office workstations is required on the remote PC in order to read corporate data. For example, to log onto an SAP application server, the SAP client software must also reside on the PC.
  • VPN software must be installed (if not using the native Microsoft client), maintained, and supported on all remote PCs.
  • "Unmanaged PCs" (those that are not maintained by the corporate IT team or consultants) bring the risk of exposing viruses onto the corporate network or inadvertently connecting that network with yet another network to which that PC may also be connecting.
  • Because of the previous requirements, users typically must use their own "managed" PC or a company laptop, one that has been configured correctly.
  • The previous requirements greatly increase IT resource requirements in order to support all remote PCs. This can be a more expensive solution in the long run than remote control.
  • Because VPN and identical office software must reside on the PC, this solution would not enable seamless access from an Internet café or at a non-configured PC at another business’ site.
  • Unless the VPN is using the SSL protocol (the one that is also used to access secure web sites), the client VPN may not work from inside another business’ network, as their firewall would likely block such protocols from leaving their network.
  • Database applications and access to files stored on office file servers may perform slowly, as the data needs to be transferred across the remote connection to the PC.
Remote Control 
A computer is remotely controlled at the office and configured with the business’ standard applications. This could be a user’s primary workstation or a spare. A workstation can only be used by one person at a time, whether local or remote. Alternatively, a purpose-built server can also be simultaneously and remotely controlled by many users. This would typically be a Microsoft Windows Server with Terminal Services installed, possibly also with Citrix software. Users log onto the computer and are presented with either a Desktop window (a Desktop within a Desktop) or a full-screen Desktop which "replaces" their local Desktop.

Pros
  • Because the user remotely controls a session on a computer in the office, no application data is transferred (only screen updates and input); therefore the performance is virtually the same as being in the office, even via a slow connection.
  • If the correct remote control solution is in place, users can access office applications from any PC in the world (including from an Internet café) as long as it is connected to the Internet and has a web browser; no laptop or mobile device is required for travelling.
  • Unmanaged PCs may also safely connect using this method, as there is no direct data link between them and the office network. The network is safe from any possible viruses on the client computer.
  • No other software is required on the remote PC so ongoing support costs for these PCs are almost nil.
Cons
  • This can be a significantly more expensive solution to implement than a remote node solution.
  • An Internet connection is required in order to do any work. For example, it would be inaccessible from a laptop computer on an airplane (although this is changing).
Other Terms
The term "Desktop" with a capital "D" in this document refers to the screen that Windows presents to the user on the monitor: the background, the Task Bar, the System Tray (bottom right corner by the clock), the Start Button and Menu, and the Desktop icons.

"Remote Desktop" is the user-friendly name for Microsoft’s Remote Desktop Protocol (RDP), the software and protocols to remotely view and control a Desktop on another Windows computer. Sometimes the remote Desktop is maximised so it fills your entire local Desktop. It appears as if you are working on the remote computer locally. Some users do not perceive the difference.

Sometimes the remote Desktop is instead a window within your local Desktop (in the original illustration, a remote Desktop with a blue background sat as a window within a local Desktop with a black background).

OPTIONS
Pros and cons listed in the sections below are in addition to the more general points above.

Hosted Remote Control
Third-party companies offer a service where an agent (a piece of software) runs on the office workstation and connects to their hosting servers. Users then log into the provider’s server via a web-site from any Internet-connected computer and are presented with a window that is the Desktop of that office workstation. These services are typically paid for by annual subscription. Products include Citrix’s GoToMyPC or Bomgar.

Pros
  • reasonably cheap to buy and use
  • very easy and cheap to implement for small numbers
  • access from any PC
  • easy to use
Cons
  • need the controlled PC to be available, powered on, and not being used by anyone else
  • would get expensive once many users are connecting to the office network
  • not as secure as in-house solutions
  • difficult to manage, and therefore more expensive, for a large number of workstations
VPN Only
A piece of software, either third-party or Windows’ built-in Connect to a network wizard, is launched from the client computer. It provides a virtual connection to the firewall on the office network, and thus to the internal office network. This is the only remote-node option presented as an entire solution (there are other solutions below which may provide a remote-node capability as well as remote control).

Remote node is the only method where users may access resources on the office network directly, rather than through remote control software. For example, a copy of the logon script may reside on the user’s local Desktop, which they can click and, voilà, mapped network drives exist, just like at work. Remote Desktop may also be used. The pros and cons of this are sufficiently described in the introduction above.
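
Such a logon script can be as small as a plain batch file; here is a minimal sketch, with invented server and share names:

@echo off
rem Sketch: map the usual office drives across the VPN connection
net use H: \\fileserver\home /persistent:no
net use S: \\fileserver\shared /persistent:no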

Remote Control via a Gateway
This solution is in many ways similar to the Hosted Remote Control option described above. The gateway to which the client connects, however, is located at the office network. Furthermore, the gateway usually uses the internal user database to authenticate users, so they may log on with their usual Windows network user name and password. The products chosen here are Microsoft’s Terminal Services Gateway (TS Gateway) and Terminal Services Web Access (TS Web). These components are both included with the Microsoft Windows Server operating system; they do not need to be purchased separately.

Users access workstations in the office by browsing to a secure web-site, logging on with their usual credentials, and then selecting from a list of available computers on the network to remotely control.

Pros
  • easy to use for end-users
  • reasonably cheap to set up and maintain
Con
  • need the controlled PC to be available, powered on, and not being used by anyone else
Terminal Server
The connection technology and method use the same TS Gateway and TS Web as described above. The only difference is that there is also a dedicated Terminal Server on the corporate network which users remotely control. Think of it as a Windows workstation with the sole difference that many users can remotely control sessions on it concurrently. The TS Web may be set up to allow remote users to connect only to the server, or it may continue to allow connections to individual Windows workstations as well.

Pros
  • The main benefit is that users no longer need to have a workstation available to which to connect.
  • It is even easier for end-users to use; it can be configured with a single icon of one remote server to select from the web page.
  • It provides a built-in form of disaster recovery: many users can work from home or other locations if the office becomes physically inaccessible.
Con
  • As many users may be using this server at one time, it must be reasonably powerful, which adds significantly to the cost of the whole solution (whether in price for a new bare-metal server or resources for a virtual machine).
Citrix
Citrix is a combination of hardware and software components that add on to a Terminal Server and remote access solution. The software would be installed on top of the Terminal Server described above, using the same hardware resources for that piece. Other software would replace the TS Web component on the utility server described in the Gateway section above, though installed on the same physical (or virtual) server. A hardware component called the Citrix Access Gateway (CAG, which may now also be called the NetScaler Access Gateway) sits outside the network firewall and performs similar functions to the Microsoft TS Gateway component.

Citrix makes the whole solution a bit more seamless for the end-user. Single sign-on will always work, even with the two-factor authentication described below. Single applications can be "published", that is, an icon is presented on the web page after logging in that runs one application only, and it appears to be running locally on the workstation. Citrix has its own remote control protocol, ICA (Independent Computing Architecture), which is a little faster than Microsoft’s RDP and handles features such as remote printing, sound, and colour depth better than RDP.

If the solution were ever to grow to two or more Terminal/Citrix servers, Citrix also seamlessly deals with load-balancing user sessions among the servers. It does this in a much more flexible and manageable way than Microsoft’s method.

Pros
  • best seamless experience for the end-user
  • fastest performance
  • much easier to grow, both in scope and in maximum concurrent users
  • infrastructure in place for in-house application publishing, another topic involving savings in managing applications
  • This is also the most secure solution, as the CAG can be configured to do numerous checks against the client computer before allowing it to connect. Microsoft has a similar offering, but Citrix has always worked on all platforms (Mac, Linux, hand-helds), not just on Microsoft clients.
Con
  • most expensive solution, both in terms of product pricing and consulting required for installation
SECURITY
Any type of remote access solution enables some form of connectivity to the office, which becomes a potential weak area in network security. All the methods above connect over a secure connection, which means that if someone were to capture that network traffic, the contents would all be encrypted. While it might be possible to crack these secure sessions, this is not the real problem.

The trouble is someone logging on, pretending to be an employee or partner, and then having access to your resources (over their own "secure" connection). Whether they crack a password using brute-force utilities, retrieve a username and password stuck on a monitor or under a keyboard while strolling through the office posing as a visitor, or obtain the credentials using social engineering - these are much easier ways to illicitly gain access to a corporate office with remote access enabled. This risk is true for any remote access method.
  • Hosted Remote Control mitigates this issue in that the end-user needs two sets of credentials: those to log onto the provider’s web-site and those to then log onto their workstation. A new risk is added, however, in that we must now also trust the security of this provider.
  • VPN is a somewhat risky solution because not only could someone log in under the identity of a valid user, but they could possibly download and then delete all data from the network if that user ID had sufficient rights. Computer viruses and other malware can also be uploaded onto the network in this manner.
  • Remote Control via a Gateway, Terminal Server, and Citrix are all remote control solutions using in-house resources, so their risks are the same. In fact, these are the least risky solutions.
All solutions would gain added security with a two-factor authentication mechanism. This is where the user possesses a physical token, often in the form of a key fob that either shows a constantly changing number to be typed in or contains a key that plugs into most computers. For the user to gain access to corporate resources, they must provide something they know, their user name and password, and something they possess, the number on the token or the physical key.


CONCLUSION
Different technologies exist to provide a reliable and secure connection to corporate computer resources from virtually any location on the planet, each with its pros and cons. The benefits are numerous, but such solutions must be implemented properly and with sufficient respect for network security.

Tuesday 29 November 2011

SME IT Consulting: the End of an Era

Am I part of the last generation of small-to-medium enterprise (SME) IT consultants? Will this consulting, as I know it, be over in a few years? I think it will be, and probably sooner than that. I think that cloud computing heralds the end of the on-site visit from the IT consultant.

We have two types of clients. The first consists of larger firms where IT managers hire us either to provide holiday cover for in-house IT support staff or to provide specific technical consulting in areas where their staff may be lacking knowledge or too busy to handle the work. The second type of client is the smaller firm where we are also the IT manager and we cover the broad range of IT for the customer.

It’s this second type of consulting I think will go away and it’s this second type of client I will be discussing for the remainder of this article.

As it is today, we build a new network which consists of servers and their applications, the in-house phone system, workstations and their applications, smart phones, shared file access, remote access, Internet access, and Internet protection. We then provide on-going and ad-hoc support; drop in for monthly maintenance checks; move the office from time to time; and are called in to discuss, plan, and implement the occasional new project.

All of our clients run Microsoft Windows servers, Windows workstations, Microsoft Office, Microsoft Exchange (e-mail) server, and the occasional Microsoft SQL (database) server. There is the occasional extra server-based business application, once in a while a custom-built application, and Bloomberg shows up fairly regularly. The all-in-one server, or small set of servers, provides workstation management, network services, and file and print sharing. iPads are springing up everywhere.

But consider the following:

  • We already provide remote screen-sharing support where we can see the user’s Desktop and show them how to do something, moving their mouse cursor for them. We can provide useful and painless remote support rather than having to be on-site.
  • We have already moved all of our clients to hosted pay-as-you-go (cloud) Exchange. We no longer support any Exchange servers directly. We continue to manage the resources via web front-ends to the Exchange resources (shared calendars, resource mailboxes, forwarding, etc.), but we never actually touch an Exchange server anymore and we haven’t built one in years.
  • We will shortly have all of our clients backing up off-site on-line over the Internet - no more on-site tape drives, tapes, and backup software.
  • We already have one client that is using a Voice-over-IP hosted phone system - no more in-house telephony systems.
  • Cloud computing already offers Microsoft Communicator or Lync services, SharePoint, SQL, and entire server platforms, virtual or bare-metal.
  • We’re considering moving the entire Windows Desktop for two of our smaller clients into hosted Windows/Citrix sessions. The price has come down and the users would actually have their own company “network” environment hosted (shared and personal network drives, just like now).

It’s hard to argue with the benefits of most of these cloud services, especially when it comes to pricing. So far, hosted Exchange has proved an absolute no-brainer in terms of its cost savings. Other benefits include built-in remote-access capability and usually Tier 3 data centre redundancy, which means there are already multiple Internet links to the resources, which are backed up continuously to other sites. It means that it would take a metropolitan-wide disaster for you to lose access to your data. We can’t build that level of high-availability resiliency in-house for small to medium size enterprises. (Well, we can, but the proportionate cost would be enormous.)

Given the trends already historical and present; given the resources already available in the cloud; given the almost non-existent implementation costs to the client; and given the other technical advantages described above - how is it possible that most SMEs will not move to the cloud?

The only IT resources left at the client office will be the workstations with operating systems or possibly even thin-client devices, telephone hand-sets, a local network, a firewall, and an Internet connection that is larger than it used to be. The networking and firewall equipment is already the type of kit that is configured off-site and delivered to the client where someone merely plugs it in and turns it on. Given the simplicity of the workstations or thin-client devices used for remote Desktop configurations, they will also be able to be configured remotely and delivered.

So where does that leave the friendly consultant who drops in every week or so? Working an eight-hour shift at a hosting centre! All of the skills used before are now needed by the hosting centre: software knowledge; determining client needs; providing support to end-users; configuring e-mail, file sharing, and phone system requirements; and providing end-user support by phone or screen-sharing sessions.

It no longer makes sense to renew all the expensive server hardware for a small network. Given that the standard hardware extended warranty period is three years, that is my prediction for when most services will be moved to the cloud for small business.

Once this three year period (at the outside) has passed, a new office IT setup might perhaps consist of an on-site visit by a salesman. However, after that, items will merely be delivered pre-configured: firewalls, network switches, and cheap PCs or thin-client devices. Maybe a junior techie/delivery person in jeans will arrive and connect it all together. From then on, it’s telephone or messenger conversations and remote support and configuration.

It’s the end of an era. It started when we moved from giant mainframe computers to distributed workstations about 20 years ago and the end is in sight. I’m convinced of this. While we will still have our technical consulting to IT managers for a while, I’m looking at options to try and prepare for this change to the small business side of things. I suggest that other IT consultants in similar areas do the same.

Thursday 9 June 2011

4 Real-World Examples and Prices of "Cloud Computing" for 1 Start-up

Yes, I still sometimes put "cloud computing" in quotation marks. For the most part, I do still agree with Larry Ellison's earlier derision of the term. I've had an e-mail account for 18 years. This means that I have used cloud computing for this long and set up small instances of it hundreds or thousands of times by now. That said, I will embrace this marketing term and use it frequently, as this is what people want to hear. Some clients take great pleasure in now being able to brag that they are using cloud computing.

Aside from the term, it is growing more useful all the time in the real world. Hosting companies and resellers have created stable, thorough, and granular web-based control panels that allow clients or their IT consultants to set up, control, and manage their hosted, subscription-based applications (cloud computing). It is especially useful for start-up companies that prefer pay-as-you-go or pay-as-you-grow monthly fees over purchasing relatively expensive hardware and software to provide the same thing.

We have a client in private equity. He started off as a one-man shop not wanting to invest much at all in infrastructure until he knew where his business was going. Cloud Computing Instance 1: we set him up with a domain name, a one-page web-site, and POP3 e-mail - £96 per year plus a couple of hours of consulting to set it up and document it. (I realise we could have gone cheaper, but we always use hosting companies that have quality help desks available by telephone 24 hours a day, answered by knowledgeable people who speak clear English.)

After a while, the client wanted a bit more from his e-mail system, mainly seamless BlackBerry synchronisation. Instance 2: we upgraded his e-mail to hosted Exchange (Microsoft's e-mail server product). Instance 3: we also set up hosted BlackBerry Enterprise Server (BES) at the same hosting company and integrated it with his e-mail. This now gave him an e-mail solution like the one he would experience in any medium-to-large firm. E-mail, contacts, and calendar are now synchronised seamlessly between his corporate laptop, his home computer, his BlackBerry, his iPad, and web-mail. The annual cost then for cloud computing was £236 plus a £5 set-up fee and another three hours of consulting (mostly migrating mail and setting up client devices).

The client grew and we needed to build him a new office, initially for 5 users but built to handle 15. We're just finishing that off this week. We did build them some in-house computing. They now have one server that is the file server, print server, central backup point for data on laptops, anti-virus program distribution and management point, provider of Windows networking services, authenticator for the Virtual Private Network (VPN), distributor of Microsoft updates, and central control for many Windows settings. You could technically call all of this a "private cloud" - applications hosted in-house, just like any network built ever since companies had servers or mainframe computers in an office.

However, we're still using cloud computing, and if that is the case I suppose you could say we are using "mixed cloud" or "hybrid cloud" computing, with our cloud and private cloud. We're still using the hosted Exchange and BES services, now with more users, extra added disk space, some resource mailboxes so they can book boardrooms on-line for meetings, some shared calendars, external contacts defined, and distribution groups. We control all this for the client with the hosting reseller's web-based control panel, with occasional help from their excellent customer support. We get the request from the client, decide best how to fulfil the request, log on and set it up, and the bill for hosted Exchange simply goes up a notch on the client's monthly credit card bill. (They also receive an invoice from us, but that would be the case for in-house e-mail as well.)

At the moment, their annual hosting fee for e-mail and BES for 5 users and all the extra resources mentioned above is about £1,320. Compare this to the cost of setting up an internal Exchange and BES server: £9,700 for hardware, software, and consulting fees to build it.

An added benefit of this solution is that their e-mail data is already hosted in a Tier 3 data centre with redundant Internet links, servers, and sites. It is also backed up. This is all "free" and invisible to the client, and even to us, the IT consultants. It's also automatically and securely accessible from the Internet, something else we would have had to build in-house, so an in-house equivalent would really cost much more than £9,700 if we were to compare apples to apples.

That said, we still needed to deal with backing up the rest of the client's data, their personal and shared files, as well as the in-house server itself. The laptops are backing up their data to the server, but we went with Cloud Computing Instance 4 to back up the server. The client currently has about 100 GB of data to back up, which includes the server itself. The backups run every night, first to a cheap local external USB hard drive, and then securely over the Internet to the backup hosting provider. (A technical prerequisite for this is a symmetric Internet link, where the upload speed is the same as the download speed. An Asymmetric DSL (ADSL) link won't quite suffice.) One fear about on-line backups such as this is Internet connectivity. What if the Internet link goes down exactly when we need to restore some data? (This is more likely to happen than you would think in a disaster scenario.) The backup software, which is provided "free" by the hosting company, first restores from the local USB disk, as that would be faster in any event. Only if that fails does it restore back down from the provider's data centre.

This costs £1,800 annually for the 100 GB backup plus a £150 set-up fee and four hours of consulting to configure it all. If we decided to install in-house backup software and a tape drive, it would cost anywhere from £2,000 (I'm aware that there are far cheaper options out there, but we don't consider them for a business environment) to £10,000 if the client wanted a tape library. There would also be an annual fee to store the backup tapes off-site, and the tapes themselves would need replacing every couple of years. An additional benefit of hosted backups is that there are no tape changes or off-site schedules to deal with. We receive e-mail alerts of any problems, check the backup logs weekly, and do a test restore annually, just as with any other backup system.

So this company hosts their most critical application, e-mail, and runs IT's most critical role, backups, in a cloud computing environment. It's invisible to the users and it makes our lives as consultants easier, which translates to lower consulting fees for the client. This company of 5 users with 100 GB of data currently pays £3,120 annually for this. Set-up and consulting fees to configure it all were roughly £1,055, a fraction of the cost to build it all in-house.

Going forward, we will continue to use these services for a while. I intend to re-examine the costs for hosted backups at 500 GB and for hosted e-mail and BES at 20 users. Hosting centre salespeople assure me that the savings curve is limitless no matter how high it goes; however, I will report my own findings on that another day.