**Updated 3/26/09 with preface**
[The following article is authored by one of the Windows Embedded MVPs (Most Valuable Professionals). Our MVPs have a heavy background in Embedded systems and are a great repository of information on Windows Embedded products. We’re providing this space on our team blog as a service to our readers by allowing MVPs to share some of their knowledge with the rest of the community.]
To be very clear, creating a custom software update or servicing solution is not something that should be taken lightly. It may look relatively easy at first glance and this makes it tempting to try, but many device projects have been derailed by custom solutions.
The reason for this is that developing the required infrastructure is quite costly. Testing costs should not be ignored and are very often underestimated. Depending on the project, only a few scenarios justify the development of a custom solution. Among them are images with a very small footprint, or scenarios where a solution provider does not own the Windows Embedded Standard image but needs to update its software on the device. On small images there may not be enough space left for the requirements of a commercial device management system, although DUA is a solution that is optimized for this scenario. There may also be special environmental constraints, such as exotic network environments, that no currently available product can satisfy.
If you move forward with developing a custom solution, the Windows Embedded Standard devices it will run on may offer functionality to assist you: Windows infrastructure components such as Windows Installer and BITS may already be part of the run-time image and can be reused for a custom servicing solution.
Things to remember
First things first, let’s look at the basic principles. A device management system needs to provide:
· Robust transport mechanisms
· Flexible installation mechanisms
· Task scheduling
· Targeting options
· Status reporting
For transport, a client-server architecture using HTTP/HTTPS or FTP is certainly state of the art. A client application on the device connects to a backend server to get instructions as well as binaries. One should always use a “download and execute” approach for distribution, because it is the most reliable: the package is downloaded completely and verified before installation starts, so a dropped connection cannot leave the device half-updated. The Background Intelligent Transfer Service (BITS) can be leveraged for these tasks if it is part of the image. In addition, it should be possible to schedule the update packages so that they are delivered to the device during a specific maintenance window, when devices are running idle and normal tasks are not disturbed by the upgrade. When choosing installation mechanisms, it is a best practice to use any local infrastructure available on the device, for example the Windows Installer service. Windows Installer provides a robust setup execution environment that can also roll back if something goes wrong. These kinds of features get very expensive when implemented on your own.
It is also good to have an overview of the state the field devices are in: at least rudimentary reporting on update successes, failures, and deliveries. There are times when you may need to release a patch for just a subset of the devices, so the servicing mechanism should also support targeting specific groups of devices.
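A simple way to implement such targeting is to match each device's reported attributes against a target specification on the server. The attribute names below (region, model) are made up for the sketch; a real inventory schema would be project-specific.

```python
def matches_target(device: dict, target: dict) -> bool:
    """True when every constraint in the target spec is satisfied
    by the device's reported attributes."""
    return all(device.get(key) == value for key, value in target.items())


def select_devices(devices: list, target: dict) -> list:
    """Filter the device inventory down to the patch's target group."""
    return [d for d in devices if matches_target(d, target)]
```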
Last, but not least, your system should not fail when your solution is successful. This requires the built-in ability to scale to a larger environment. It should also include some randomization that prevents all devices from downloading a patch at exactly the same time, which could, for example, bring down the server. Introducing some jitter around the scheduled download time avoids this.
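The jitter can be as simple as a uniformly random delay added to the start of the maintenance window, as in this Python sketch (the 30-minute default window is an arbitrary example value):

```python
import random
from datetime import datetime, timedelta


def jittered_start(window_start: datetime,
                   max_jitter_minutes: int = 30,
                   rng=None) -> datetime:
    """Spread device check-ins across the maintenance window so the
    server is not hit by every device at the same instant."""
    rng = rng or random.Random()
    offset_seconds = rng.uniform(0, max_jitter_minutes * 60)
    return window_start + timedelta(seconds=offset_seconds)
```

Each device computes its own start time locally, so no central coordination is needed to spread out the load.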
Good change management solutions require strong back ends. They have to be available, robust, and scalable, which means this cannot be a single-server setup. Instead, modern clustering technologies such as IP load balancing and failover clustering, like those provided with Windows Server 2008, need to be implemented.
Updating the updater
It should not be forgotten that the local client for the servicing mechanism occasionally needs updates as well. This can be achieved either by implementing dedicated update functionality for this client in the “normal” application, or by having an additional helper program that the standard update client launches to replace the client itself, after the new version has been delivered to the machine.
TEST, TEST, TEST
While this is true for any change management system, it is especially important for custom systems. One needs to test the packages to be delivered as well as the underlying servicing infrastructure. I stress this heavily, because many costly errors are avoided by good, well-structured tests. A staging line for testing should be established in parallel with the production line. In addition, you should create a well-structured checklist of processes that need to be executed for every update before approving it for release. The tests should also be done by dedicated test personnel, not the application developers. We all know that things always work on a developer’s machine, right?
Think twice before implementing change management solutions
Getting back to my warning at the beginning, I hope I have provided some insight into why it may not always be a good idea to implement a custom update solution on your own. If you do see the need, be aware of the risks and the additional costs. There are commercial solutions on the market that already provide much of the required functionality and can also be customized to a certain extent (for example IBM Tivoli, HP OpenView, Altiris, CA System Management, NetOP), in addition to Microsoft’s offerings of DUA, WSUS, and SCCM. It may be worth taking a closer look at them.