Wednesday, October 10, 2007

DebugActiveProcess and DEBUG_ONLY_THIS_PROCESS

When attaching to a process to debug it, the default debugging flag is DEBUG_ONLY_THIS_PROCESS, meaning that if the process calls CreateProcess the child process will not be debugged. But if the process is started by the debugger it can be started with DEBUG_PROCESS, meaning that both the new process and any child processes it creates will be debugged by the same debugger.

I have been looking for a way to change this DEBUG_ONLY_THIS_PROCESS flag, since Application Inspector is able to attach to a process and I want it to monitor all child processes as well.

A couple of days ago I stumbled on a comment in a Microsoft description of what was new in Windows XP, talking about Dynamic Control over the Debug-child Flag. The description did not explain how to change the flag, so I had to investigate.

After some debugging of windbg and dbgeng.dll, which can change the setting using the command .childdbg, I found the kernel call that does what I want. It's the undocumented NtSetInformationProcess that is able to change the debug flags.

Here is the code to set the flag.


//
// Process options.
//

// Indicates that the debuggee process should be
// automatically detached when the debugger exits.
// A debugger can explicitly detach on exit or this
// flag can be set so that detach occurs regardless
// of how the debugger exits.
// This is only supported on some system versions.
#define DEBUG_PROCESS_DETACH_ON_EXIT 0x00000001
// Indicates that processes created by the current
// process should not be debugged.
// Modifying this flag is only supported on some
// system versions.
#define DEBUG_PROCESS_ONLY_THIS_PROCESS 0x00000002

typedef enum _PROCESSINFOCLASS {
    ProcessDebugFlags = 31 // From ntddk.h
} PROCESSINFOCLASS;

typedef DWORD (CALLBACK * NTSETINFORMATIONPROCESS)(
    IN HANDLE ProcessHandle,
    IN PROCESSINFOCLASS ProcessInformationClass,
    IN PVOID ProcessInformation,
    IN ULONG ProcessInformationLength
    );

static DWORD DebugSetProcessOptions(HANDLE hProcess, ULONG DebugFlags) {
    static NTSETINFORMATIONPROCESS g_NtSetInformationProcess = NULL;

    if (g_NtSetInformationProcess == NULL) {
        HINSTANCE hNtDll;

        hNtDll = LoadLibrary(_T("ntdll.dll"));
        if (hNtDll == NULL) {
            return GetLastError();
        }

        g_NtSetInformationProcess = (NTSETINFORMATIONPROCESS)GetProcAddress(hNtDll, "NtSetInformationProcess");
        if (g_NtSetInformationProcess == NULL) {
            return GetLastError();
        }
    }

    return g_NtSetInformationProcess(hProcess, ProcessDebugFlags, &DebugFlags, sizeof(DebugFlags));
}
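As a usage sketch (the PID handling, error reporting and the AttachAndFollowChildren name are all illustrative, not from the original post), attaching to a running process and then enabling child-process debugging might look like this:

```c
#include <windows.h>
#include <tchar.h>
#include <stdio.h>

/* Sketch only: attach to a process by PID and ask the kernel to let
   this debugger follow child processes as well. Assumes the
   DebugSetProcessOptions function defined above. The flag value 1
   (PROCESS_DEBUG_INHERIT in the DDK headers) is my reading of the
   kernel-side ProcessDebugFlags semantics - verify on your system. */
static void AttachAndFollowChildren(DWORD pid)
{
    HANDLE hProcess;
    DWORD err;

    if (!DebugActiveProcess(pid)) {
        printf("DebugActiveProcess failed: %lu\n", GetLastError());
        return;
    }

    hProcess = OpenProcess(PROCESS_ALL_ACCESS, FALSE, pid);
    if (hProcess == NULL) {
        printf("OpenProcess failed: %lu\n", GetLastError());
        return;
    }

    err = DebugSetProcessOptions(hProcess, 1 /* debug children too */);
    if (err != 0) {
        printf("DebugSetProcessOptions failed: 0x%lx\n", err);
    }

    CloseHandle(hProcess);
}
```

After this call the debug loop in the attaching process should start receiving CREATE_PROCESS_DEBUG_EVENT for children spawned by the debuggee.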

Wednesday, September 5, 2007

New workhorse

I have just finished assembling my new workhorse in the home office. I have a server mostly running VMware, and it has been a little slow lately. But now, after upgrading the motherboard, CPU and memory, it's a lot quicker.



The spec. for the new machine:

  • AMD Athlon X2 6000+

  • aBit AN-M2HD motherboard

  • 2GB Crucial BallistiX PC640

To keep the system quiet I also ordered a Thermaltake BigWater 745 water cooling system. The picture shows everything installed in the case.

Because the case has an unusual design, with the drive bay for up to four hard disks at the bottom of the case, I had to place the water pump on the top instead. This makes the noise of the pump a little bit louder, but not too much. I have not done any real tests of the cooling yet; more information on the resulting temperatures will follow on this blog.

For the new system I decided to change the host OS as well. I'm now running Ubuntu 7.04 (Feisty Fawn) amd64. To simplify the migration I installed Linux on a new disk and created a VMware machine from the old hard disk. To be able to boot the machine I had to change the newly created VMware machine to use an IDE interface for the disk. This was a simple edit of two config files: in the vmdk file the adapterType had to be changed to ide, and in the vmx file all scsi0 entries had to be changed to ide0. After the old machine was booted virtually I converted it to a standard file-based VMware image using VMware Converter.
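Roughly, the two edits look like this (a sketch; the disk file name olddisk.vmdk is illustrative and stands for whatever your converted disk is called):

```
# In the .vmdk descriptor file, change the adapter type:
ddb.adapterType = "ide"

# In the .vmx file, replace every scsi0 entry with an ide0 equivalent, e.g.:
ide0:0.present = "TRUE"
ide0:0.fileName = "olddisk.vmdk"
```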

Wednesday, June 13, 2007

VMware kernel debugging

Since I'm mostly travelling around between my different customers, my laptop is the only computer I have around. To set up test environments for the different projects I use the free VMware Player and VMware Server products to create virtual machines in which I can test different solutions.

One of my last projects was to find a memory leak in a kernel mode driver. To do this I needed to do some kernel mode debugging again, something that normally involves two computers connected by a serial cable. But not any more: it's a simple thing to set up a kernel mode debugging session using VMware.

First we need to get VMware to export a COM serial port to the host. This can be done through a named pipe and some lines added to the VMware configuration file for the virtual machine.

serial0.present = "TRUE"
serial0.fileType = "pipe"
serial0.fileName = "\\.\pipe\com_1"
serial0.tryNoRxLoss = "TRUE"
serial0.pipe.endPoint = "server"

We also need to enable kernel mode debugging in the target OS. This is done by editing the c:\boot.ini file for the virtual machine OS. Start by copying the current startup line, and add the /debug, /debugport and /baudrate startup arguments. My boot.ini looks like this:

[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003, Enterprise" /noexecute=optout /fastdetect
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003, Enterprise DEBUG" /noexecute=optout /fastdetect /debug /debugport=com1 /baudrate=115200


Now boot the virtual machine and select the DEBUG entry. During the boot Windows will stop and wait for the debugger. To start the debugger we give windbg the arguments to connect to the named pipe that the VMware player has now created.

windbg.exe -b -k com:pipe,port=\\.\pipe\com_1,resets=0

The boot will stop on a breakpoint during startup; just type g and press Enter to continue.

To shorten the round-trip when doing driver development it's very nice to be able to change the driver directly on the target machine, since most bugs in kernel mode end up in a BSOD (Blue Screen of Death). For this, a VMware utility called DiskMount comes in very handy. With this utility it's possible to mount the virtual machine's hard disk on your host computer and replace the driver before the next boot.
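The round-trip then looks roughly like this (a sketch; the drive letter, paths and driver name are illustrative, and you should check the DiskMount documentation for the exact syntax of your version):

```
vmware-mount.exe X: "C:\VMs\target\target.vmdk"
copy mydriver.sys X:\WINDOWS\system32\drivers\
vmware-mount.exe X: /d
```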

Tuesday, June 12, 2007

Java Web Start connection problem

Anyone who has deployed a large application using Java Web Start 1.5 or 1.6 over a slow WAN or through some kind of SSL proxy has noticed the extreme number of connections that can be initiated during application startup. The first deploy of the application works as expected and all the JAR files are downloaded. But the second time the user clicks to start the application, Java Web Start will flood the network with connections to the server.

Investigating the problem reveals an interesting piece of code in the class com.sun.javaws.LaunchDownload, in the method updateCheck. This method is called once for every JAR file in the JNLP description file. It does the update check by starting a new thread and issuing an HTTP request to the web server to check if the JAR file has been updated.
This means that if we have a JNLP file with 200 JAR files, the code will start 200 threads, creating 200 individual connections to the server!

The solution can be found in the same function as well, as the first lines of the function reads:

// no update check for versioned resource
if (version != null) return;


This means that if we put version attributes on the JAR files we will not invoke the badly written code. Adding the version attribute is simple; just add it to the jar element of the JNLP file:

<jar href="/application/foo.jar" version="1.0" />


When Java Web Start finds this attribute on a jar element in the JNLP file it will send the version string along with the GET request to the server, like:

GET /application/foo.jar?version-id=1.0

To handle this on the server we have two options: if we are running a J2EE environment, the JDK contains a JNLP servlet that can respond to this request and also return the required x-java-jnlp-version-id custom header; or we can implement the same functionality in some other kind of server-side language.

I chose to implement the JAR server in PHP/Apache, since we have noticed that the J2EE container we are using is not very good at serving large amounts of data.

To have Apache invoke my script for every JAR file requested from the server, I added the following lines to the httpd.conf file.

AddType application/java-archive .jar

Action application/java-archive /cgi-bin/jar_send.php

The script is very simple; we have chosen to store the versioned JAR archives
as /application/foo_x_y.jar for version x.y. This makes the script very simple to implement, and makes deploying very simple as well. Another strategy could be to store a complete version of the application under /x.y/application/foo.jar and have all JAR archives in the JNLP file reference the same version.

Here is the code for the jar_send.php script:


<?php

// Make sure we have got a version-id argument
if (isset($_GET["version-id"])) {
    $version = $_GET["version-id"];
} else {
    $version = null;
}

// Retrieve the requested file
$file = $_SERVER["PATH_TRANSLATED"];

// If no version is requested, or version is 1.0, send foo.jar
if ($version == null || $version == "1.0") {
    $path = $file;
} else {
    // If version 1.1 is requested, send foo_1_1.jar
    $x = strrpos($file, ".jar");
    $path = substr($file, 0, $x);
    $path .= "_";
    $path .= str_replace(".", "_", $version);
    $path .= ".jar";
}

// Make sure the file exists
if (!is_file($path)) {
    header("HTTP/1.0 404 NOT FOUND");
    print $path;
    die;
}

// Open the file
$f = fopen($path, 'rb');
if ($f === false) {
    header("HTTP/1.0 404 NOT FOUND");
    print $path;
    die;
}

// Send the JNLP custom header
header("x-java-jnlp-version-id: $version");

// Inform Apache about how much data we are going to send
header("Content-Length: " . (string)(filesize($path)));

// Send the data in 8K blocks
while (!feof($f) && (connection_status() == 0)) {
    print(fread($f, 1024 * 8));
    flush();
}

// Close the file
fclose($f);

?>

Monday, May 21, 2007

Java SSL client authentication with intermediate CA

Last week I helped a customer with a problem doing client authentication against a web service. The connection did not work because the client software, implemented using JBoss, was not sending its client certificate during the handshake. The client certificate was issued by Verisign, which uses an intermediate CA to issue the certificates.

The server side was using a Microsoft ISA 2004 server. We started with checking the list of trusted certificate issuers the server was sending out with the help of the openssl toolkit.

openssl s_client -host secure.customer.com -port 443
Look at the list after Acceptable client certificate CA names.
We concluded that the server was only sending a list of self-signed root CA certificates. The server had a lot more CA certificates installed, but only a subset was presented as acceptable client authentication CAs.

In Microsoft ISA 2004 you don't have any means of configuring the list of acceptable CAs, so we had to look at the other side of the communication.

The client software is implemented using JBoss 4.0.5.GA and JBoss Remoting 1.4.3.GA, but as we found out, the problem was a general Java problem.

When requested to present a client certificate, the Java SSL classes use an interface called KeyManager to locate the correct credential. The default implementation, SunX509KeyManagerImpl, searches through all PrivateKeyEntry elements found in the configured keystore. During the search the class matches all certificates in the entry against the list of acceptable CAs.

To get client authentication to work with a client certificate issued by an intermediate CA we need to have the complete chain present in the PrivateKeyEntry in the keystore.

To update a keystore with the correct certificate chain, we can use the openssl toolkit again. First get hold of all certificates in PEM format. To convert a DER encoded certificate to PEM format use:
openssl x509 -in file.der -inform DER -out file.pem
Then create a PKCS#7 encoded certificate chain with:
openssl crl2pkcs7 -nocrl -out chain.pem -certfile root.pem -certfile intermediate.pem -certfile clientcert.pem
Now this complete certificate chain can be imported into the Java keystore using the standard keytool program. But first we need to know the alias of the key we are going to update. We can list the content of the keystore with:
keytool -list -keystore keystore-file
During the import of the certificate chain, keytool will check that the certificates match the private key, so it should not be possible to corrupt the keystore. But as always, back up the file if your production depends on it...

To import the chain, use:
keytool -import -keystore keystore-file -alias alias -file chain.pem
Keytool will ask if you trust the certificate issuer, just answer Yes.

We can now check the content of the keystore again and make sure that the complete chain is present with:
keytool -list -v -keystore keystore-file
This command will do a verbose listing of the keystore, with all certificates.

Thursday, April 12, 2007

Ide2cf timeout with Linux

In my MythTV frontend I use a compact flash card as boot device, making the frontend into a diskless, silent media player. To connect the CF card to the system I use an IDE2CF adapter.

One annoying problem is that during boot I get a couple of seconds of timeout and some error output on the console. I think this comes from the IDE2CF adapter not supporting DMA correctly. The output I get is:

hda:hda: dma_timer_expiry: dma status == 0x21
hda: DMA timeout error
hda: dma timeout error: status=0x58 { DriveReady SeekComplete DataRequest }
ide: failed opcode was: unknown

I have searched the Internet, and the only solution I found said to add the ide=nodma command line parameter to the kernel in the GRUB menu.lst file. That did not work; I still got the timeout.

I then decided to look into the source code directly, to try to find the error message and maybe see why the ide=nodma option was not working. I studied the code and found one interesting function. In the file ide-probe.c the code probes the IDE disks, and one call is to a function called ide_dma_check that probes the DMA capabilities of the drive. It seems that ide_dma_check does its job and ignores the nodma command line option. My simple solution was to remove the call to ide_dma_check completely; I don't want DMA in my setup.

To patch your kernel you need to download the kernel source and the tools needed to build the new kernel.
My whole MythTV system is built on Debian, so your exact procedure may be different if you use a different distribution.


#>apt-get install linux-source-2.6.18 kernel-package libncurses5-dev libqt3-mt-dev

Unpack the source.

#>cd /usr/src/
#>tar -jxf linux-source-2.6.18.tar.bz2
#>ln -s linux-source-2.6.18 linux
#>cd linux

Patch the ide-probe.c file to disable the DMA probing by searching for the ide_dma_check function call.

#>nano drivers/ide/ide-probe.c

You should find:


	hwif->ide_dma_off_quietly(drive);
#ifdef CONFIG_IDEDMA_ONLYDISK
	if (drive->media == ide_disk)
#endif
		hwif->ide_dma_check(drive);
}

Just comment out the call to hwif->ide_dma_check(drive) and the error message and startup timeout should go away. I admit that the right solution is probably to look at the noautodma global variable, but this patch fixed my problem.

// hwif->ide_dma_check(drive);

Now build your new kernel.

#>make oldconfig
#>make-kpkg --initrd --append-to-version=-nodma kernel_image kernel_headers

After a (long) while the compilation will be finished and you can pick up two new .deb packages under /usr/src: one package for the kernel image and one for the kernel headers. The new kernel image can be installed onto the frontend with:

#>dpkg -i linux-image-2.6.18-nodma-10.00.Custom_i386.deb

Now just reboot.
Good luck!

Thursday, March 29, 2007

Template reports

Today I wrote a reporting tool for our Web 2.0 project. I wanted a simple solution that enabled the end customer to create his own reports based on the ones we create during development.

To make the system flexible I used the Smarty template engine. Smarty works by compiling templates describing layout into PHP files that are then combined with data from an application into a presentation. The normal use of Smarty is to create HTML pages for viewing in a browser, but the code is well written and can be used in many more ways.

I also wanted the SQL queries to be configurable, to allow even more flexibility in the design of the reports. The solution was to use a feature of Smarty that is normally used to avoid hard-coded values in the templates: the config files. In my solution the report is a config file that specifies the content of the report. A simple report looks like this:



name = Users
desc = This report list all active users in the system.

[list]
Users = SELECT iuser.cn FROM iuser WHERE istatus = 'active'

[screen]
users = users_table.tpl

[print]
head = print_header.tpl
users = users_table.tpl
foot = print_footer.tpl

[excel]
users = users_table.tpl

The application can easily build a list of all the reports in the system by looking at all the *.rep files in the reports directory, using the glob PHP function. The reports can be rendered in three different ways: screen, print or excel.
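As an illustration (not from the original post), a minimal users_table.tpl matching the Users query above could look like the sketch below; the $Users variable name comes from the [list] entry, the cn column from the SELECT statement, and the markup itself is just a placeholder:

```
<table>
  {foreach from=$Users item=row}
  <tr><td>{$row.cn}</td></tr>
  {/foreach}
</table>
```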

To init the Smarty package, I use:

$smarty = new Smarty;
$smarty->compile_check = true;
$smarty->debugging = false;

$smarty->config_dir = "reports";
$smarty->template_dir = "reports";

$conf = new Config_File("reports");

And to get information about a specific report, I use the Smarty Config_File class to query the parameters from the .rep file:

$res["name"] = $conf->get($report_file,null,'name'); // Get report name
$res["desc"] = $conf->get($report_file,null,'desc'); // Get report description

To generate the report, the PHP code fetches and executes the different queries configured in the report. Two different kinds of queries are supported, queries returning lists and queries returning a single value.

$lists = $conf->get($report_file, 'list');
foreach ($lists as $name => $query) {
    $res = iDbSelect($query);
    $smarty->assign($name, $res);
}

$vars = $conf->get($report_file, 'var');
foreach ($vars as $name => $query) {
    $res = iDbSelectOne($query);
    $smarty->assign($name, $res);
}

After all the data has been read from the database, Smarty is called to render the report. The $show variable holds the type of report to generate; this can be screen, print or excel. This makes it possible to have some parts of the report generated the same way regardless of whether the report is intended for the printer, the screen or Excel.

$displays = $conf->get($report_file, $show);
foreach ($displays as $display) {
    $smarty->display($display);
}

By using this solution we can easily design reports made up of smaller parts that the customer will later be able to reuse when designing his own custom reports.

Sunday, March 25, 2007

History reloaded

My last post was about Really Simple History and how to get it to work in IE7. I forgot to mention the problems you easily get when using dhtmlHistory.js and IE if you serve your pages from a script language. For dynamic history to work, IE must preserve the content of a form between page reloads, and this can only happen if the web server returns status code 304 (Not Modified) on the page access.

In my case I used PHP as the server side script language to build my pages and had problems getting F5 to reload the page and stay on the current page. After a page reload, dhtmlHistory.js considered it a first load and the home page was shown again.

The solution I found after a while was to check if any of the pages had been modified and return the 304 status code from PHP. I also use an ETag header to make the check for modifications simple.

The index.php file creates the HTML by listing all page_*.php files in a directory, using the glob PHP function. After that I check for the file that was modified last, because this time will be used as the modification date for the generated page. When serving the generated page an ETag header is also generated, constructed by applying the md5 PHP function to the modification time.

On hitting F5 the web browser will send this ETag back in a header (If-None-Match) to check for modified content. In PHP this header is available as $_SERVER["HTTP_IF_NONE_MATCH"], and the code checks whether the current modification time gives the same ETag; if so, a 304 status is returned.

Here is the code from index.php:


$pages = glob("page_*.php");

$mtime = filemtime("index.php");
foreach ($pages as $page_file) {
    $m = filemtime($page_file);
    if ($m > $mtime) {
        $mtime = $m;
    }
}

$etag = (isset($_SERVER['HTTP_IF_NONE_MATCH'])) ? $_SERVER['HTTP_IF_NONE_MATCH'] : "";

$etime = $mtime + 3600; // How long a cache server may keep this content without asking again (1 hour)

if ($etag == md5($mtime)) {
    header('HTTP/1.0 304 Not Modified');
    die;
}

header("Last-Modified: " . gmdate('D, d M Y H:i:s', $mtime) . ' GMT');
header("Expires: " . gmdate('D, d M Y H:i:s', $etime) . ' GMT');
header("ETag: " . md5($mtime));

Thursday, March 22, 2007

History restored

I currently work on a Web 2.0 project for a customer where we use AJAX extensively. We download all of the HTML code and JavaScript from index.html, and every part of the application after that is implemented using DHTML and AJAX requests to the server.

To give the user the standard Web experience we are using a JavaScript library called Really Simple History. By using this library we can control what will happen if the user presses the back button in the browser.

The library worked great until IE7 started to appear. Suddenly we got strange "white-outs" of the pages: when clicking a button or a link, all HTML of the page was removed and only two buttons of the UI were left.

After some digging using Google I found a post in Spanish describing what went wrong, and a fix. The problem seems to be that dhtmlHistory.js breaks the DOM in some way. I implemented the fix suggested by jorgemaestre and it worked; the white-outs were gone.

But instead I got other problems. Pressing F5 for a page reload did not work as expected: after hitting F5 the site reloaded, but I was taken back to the home page, meaning the history function did not work. I did some debugging and discovered that for dhtmlHistory.js to work, the form field values need to be preserved. I also found another post on the Internet about IE not saving the values of a form if it's created after the page has loaded.

The solution is to let dhtmlHistory.js create its form during normal page loading, but as the last part of the page, after all other content. To implement the fix:

1) Open dhtmlHistory.js and remove the last rows saying:


/** Initialize all of our objects now. */
window.historyStorage.init();
window.dhtmlHistory.create();


2) To your main index.html (or whatever page is using dhtmlHistory.js), add the following as the last part before </body>:


<script>
/** Initialize all of our objects now. */
window.historyStorage.init();
window.dhtmlHistory.create();
</script>

Monday, March 19, 2007

IE7 does not Excel

The statistical module in one of our web applications has a function to download the data directly into Microsoft Excel. This function had been working like a charm since the application was deployed last year, but now we got problem reports from IE7 users who were unable to use the function.

I did a quick test and found no problem, a not so uncommon situation for anyone involved in software development. I was about to report the usual "It works in development" back to the customer when I figured I should do another test using the full test environment. Now I got the reported failure: IE7 fails to download the file and a popup is shown instead. Aha! It's HTTPS that's causing this.

After some digging around I found the answer in the header() documentation on php.net. To get file downloads to work for IE7 you must add two more headers to the response:


header('Pragma: private');
header('Cache-control: private, must-revalidate');

This tells IE7 to allow the encrypted file to be saved locally, a requirement for being able to open the file using Excel. I have not found any pointers about why the behaviour was changed between IE6 and IE7.
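For context, a complete download response might look like the sketch below (the file name, content type and the $path variable are illustrative assumptions; only the last two header() calls are the IE7-specific additions):

```
<?php
// Sketch of a full Excel download over HTTPS; $path is assumed to
// hold the path of the generated report file.
header('Content-Type: application/vnd.ms-excel');
header('Content-Disposition: attachment; filename="report.xls"');
header('Content-Length: ' . filesize($path));

// The two headers IE7 needs to save the encrypted file locally:
header('Pragma: private');
header('Cache-control: private, must-revalidate');

readfile($path);
?>
```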

I hope this saves an hour for you!

Friday, March 16, 2007

Hard disk slowfox

I've had some problems for the last weeks with the performance of my laptop and was looking around to buy a new one. But since there is a lot of comparing to do to decide which one to buy, and I was not able to find the perfect one, I started to investigate what was causing the slowdown instead. I ignored the usual helpful comments about Windows slowing down over time, and the questions about when I last reinstalled Windows.

I did the standard scans using Ad-Aware and did not find anything alarming. I removed all unneeded startup items from the registry and the start-up folders and stopped all services I could be without, but the computer still felt like an old 386. I was near or at 100% CPU utilisation without getting much work done; something must be broken.

I downloaded a performance measurement tool called PerformanceTest 6.1 to try to get a measure of how slow my laptop was, and indeed it was slow: I got a 1.2MB/s transfer rate to my hard disk. That is not much. I tried the program on my other machine and got 40MB/s. Now I was convinced something was terribly broken.

After a couple of hours of scanning for rootkits and examining all class filters for the hard disk device stack, my brother suggested that I check the DMA settings on the IDE channel. I said sure, let's check, because DMA is always active and I did not suspect it could be disabled. I was running my hard disk in PIO mode! How could that be?

After another hour of investigation I found the Microsoft Knowledge Base article KB817472 that explains the problem. If the driver receives errors from the hard disk, it tries to lower the transfer speed on the IDE channel. The real problem starts when you use a laptop and only hibernate the system, since you are likely to get some errors when starting up again. My machine had lowered the transfer mode to the lowest PIO mode, where the CPU is used to output a byte at a time to the hard disk, at a high cost in CPU cycles and with very slow throughput.

The "fix" to this problem was simple, just delete the Primary IDE channel in device manager and reboot, and the system will plug and play the device back again with restored performance. Running PerformanceTest again gave me 25MB/s to the disk.

Microsoft has also added an option to disable this "feature" in the driver, and I think you can guess my setting...

Thursday, March 15, 2007

LiveUpdate blues

Today I decided to do a backup of my laptop; it had been a while since the last time. So I enabled all the Symantec services and started Norton Ghost to get the backup running. I also decided to check for updates at the same time, just to be sure. I launched LiveUpdate from the Norton Ghost main menu; it checked for updates and stopped with a cryptic:

LU1848: Couldn't create callback object

OK, let's ask Google. I got a lot of hits, and the common recommendation seems to be to uninstall all Symantec software and reinstall it again. A lot of people also recommended ignoring the second step and buying something different.

I thought this was a good opportunity to test our next product, Application Inspector. Application Inspector is an easy-to-use merge of regmon and filemon, built for non-developers. It works by recording a lot of the system API calls the application does and checking for errors. I started luall.exe using Application Inspector and after fixing a couple of bugs (it's still in development) I got a result. LiveUpdate was looking for a COM object with the CLSID:

{DBBC1D05-9B24-42BE-9AB9-EDFEB039806A}

A quick Google search for this CLSID gave me the faulting Symantec product. It should be pointing to a SymIDSLU.LUCallback object that was missing from my system. I had noticed before a reference to an IDS program in the list of applications that LiveUpdate was trying to update. I think this must be a leftover from an old Norton Internet Security installation that Dell shipped with the computer.

OK, now how to fix this? I had some time; the computer was just doing the backup anyway and was almost too slow to do any real work. I started to poke around among the LiveUpdate files and looked at the configuration files for LiveUpdate. But the Product.Inventory.LiveUpdate file, which keeps the list of installed applications that LiveUpdate is supposed to update, is encrypted or something; it's not readable. I then found the ProductRegCom_2_6.DLL module and started to look at the type library embedded in that file. The API was simple and I soon had a small test program working to dump all the information I needed, and after a small fix I had the faulty application removed from the LiveUpdate repository. Nice!

Here is the code:


#include "stdafx.h"

#import "c:\program files\symantec\LiveUpdate\ProductRegCom_2_6.dll" named_guids

using namespace PRODUCTREGCOMLib;

_COM_SMARTPTR_TYPEDEF(IEnumString, __uuidof(IEnumString));

int main(int argc, char* argv[])
{
HRESULT hr;

argc--;
argv++;

hr = CoInitialize(0);
if (FAILED(hr)) {
printf("Failed to init COM\n");
exit(1);
}

IluProductRegPtr p;

hr = CoCreateInstance(CLSID_luProductReg,NULL,CLSCTX_INPROC_SERVER,IID_IluProductReg,(void**)&p);
if (FAILED(hr)) {
printf("Failed to create LiveUpdate Product Reg object\n");
exit(1);
}


if (argc) {
hr = p->DeleteProduct(*argv);
if (FAILED(hr)) {
printf("Failed to delete product\n");
exit(1);
}
return 0;
}

IEnumStringPtr t;

t = p->GetProductMonikerEnum();

LPOLESTR prod;
LPOLESTR prop;

while(t->Next(1,&prod,NULL) == S_OK) {
printf("%S\n",prod);

IEnumStringPtr x;

x = p->GetPropEnum(prod);

while(x->Next(1,&prop,NULL) == S_OK) {
printf(" %S\n",prop);

VARIANT v;

VariantInit(&v);

p->GetProperty(prod,prop,&v);

printf(" %S\n",v.bstrVal);
}
}

return 0;
}