Friday, June 10, 2005

BizTalk... it ain't easy.

I had to interview someone yesterday for a developer role, and the discussion inevitably got round to BizTalk. He was rather argumentative, which made the BizTalk discussion particularly frustrating, as he kept insisting that BizTalk was a very easy product to understand. I think precisely the opposite, since the impact of a BizTalk implementation within an organisation can be extremely complex, and the move to a message-based architecture is significant, to say the least.

I think the confusion lies in Microsoft's classic 'easy-to-use' philosophy, which serves the developer community so well, but which can often lead to people overestimating their own abilities. Yes, creating an XSD in the schema designer is easy. So is creating a map, or even an orchestration. Developing and deploying artefacts into the environment is just drag'n'drop. How hard can it be?

Understanding the underlying architecture and being able to use it to its best advantage and in the most appropriate manner is anything but easy, and to my mind, if you meet someone who thinks implementing BizTalk is simple, they probably don't understand what they're doing.

Friday, June 03, 2005

Connected Systems Competition

I would be tempted to enter this contest, but my day-job seems to be taking up too much time :-S. I think that a SOAP receive adapter, for polling web services, would be a good bet for category #14?

On the subject of the day-job, I obviously haven't posted for a while, and that's largely 'cos I'm so busy. I've spent the last month working at the commercial end of the business, spending 8 hours a day in meetings, and very little time at the sharp-end. I will, however, endeavour to resume posting when (/if) things calm down, as we're working with some pretty cool stuff, and the CSF is part of our roadmap.

Friday, May 06, 2005

Visual disk space manager

Just installed DiskView from Vyooh - which looks like a great visual explorer tool that shows where all that HDD space has gone. It plugs into Explorer itself, which makes it much more accessible than separate space manager apps.

MSFT -= vapourware

Interesting quote from Rich Turner's blog today:
"Microsoft explicitly does not claim to have a SOA. We do not subscribe to the notions of SOA and do not refer to any of our products as SOA. We do strongly subscribe to the notions of SO as abstract guidance and goals for building distributed systems. We have a broad range of powerful products, technologies and services which can be used to construct sophisticated, powerful, efficient solutions ... but we don't have a SOA because we don't believe such a thing exists, and if it is, its too poorly defined for us to adopt."

Thursday, May 05, 2005

Services and Indigo blog

Just discovered Rich Turner's blog on all things Indigo, including an excellent discussion on the ESB, SOA, ... landscape. Well worth reading.

Thursday, April 28, 2005

Developer-enhancements

I'm trying to work through the "Designing .NET Class Libraries" series, and came across this from the notes accompanying the Rich Type System episode:

"Slide 56: Did anyone happen to use a very early, pre-beta release of the .NET Framework and actually see the PrinterOnFireException? I would be interested to hear if anyone actually spotted this..."

It reminded me of something I once saw on an internal functional requirements document: "System back door for blackmail ... in case we all get ripped off"

Monday, April 25, 2005

Enterprise Library

I realise I'm a bit slow on this one, but the patterns and practices group have released all of their application blocks in a single download - the Enterprise Library. I'm not sold on all of it (e.g. I'm a log4net man myself), but the download now includes QuickStarts and full documentation, making it more of an SDK than simply a set of useful components.

Latest BizTalk Bloggers Guide

Get it here.

Thursday, April 14, 2005

All change

I've started a new job this week, for a startup working in the Windows Media world.

As an early introduction to the music business, I spent this afternoon in a meeting with a major record company where I had to sit on a spacehopper!

Tuesday, April 12, 2005

BizTalk performance article

I was lucky enough to work with Wayne Clark on the BizTalk JDP a while back, and so can appreciate that he definitely knows his stuff, making anything he posts worth reading. This is an excellent article on the internals of the BizTalk engine, and how it handles high loads.

Sunday, April 10, 2005

Flickr

Apologies for the formatting - not quite sure what's going on here...
I've started using Flickr to store my photos online. It's still in beta, and a little buggy, but it's a fantastic Flash application - a genuine rich-client experience. This is a pic of the Aiguille du Midi - the starting point of the Vallee Blanche ski run.

This is a shot from the Refuge du Requin, about half-way down the Vallee Blanche.

A shot from the top of the Grands Montets. It's hard to see in this thumbnail, but this is the start of the Haute Route, and if you look hard, you can see a series of zig-zag tracks making their way up and out of the valley, towards Zermatt.

Finally, some moguls, for those who like that sort of thing.

Wednesday, April 06, 2005

Concurrent connections when consuming web services

Think I may have mentioned this before - but check out this post from Darren Jefford re. the low-level throttling of web requests when consuming web services from BizTalk.
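From what I remember, the throttling in question is System.Net's default limit of two concurrent connections per host, so the fix is the standard connectionManagement setting - for BizTalk, in BTSNTSvc.exe.config. A sketch (the value of 20 is purely illustrative):

<configuration>
  <system.net>
    <connectionManagement>
      <!-- illustrative value: raises the 2-connections-per-host default -->
      <add address="*" maxconnection="20" />
    </connectionManagement>
  </system.net>
</configuration>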

Monday, April 04, 2005

Unit testing and data access

Something that has come up in conversation recently wrt unit testing is the subject of how to unit test complex code fragments that use data persistence code, without having to interfere with source databases / reset data etc. It would be great to be able to test the code with 'static' data that is known to be correct (or incorrect if testing exceptions) that passes through the DAL, but does not require 'real' data access (e.g. you might want to unit test a business component method that internally calls the GetMyObject() method).

My current DAL technique of choice turns out to have a nice side effect in making this possible. I tend to start development by creating an abstract data layer with no concrete implementation. I then use a factory class to create a specific DAL object using dynamic class loading. This way I can create multiple DALs and switch them in and out using config values, meaning that I can use a static file-based DAL for unit testing, and then switch in the real SQL DAL class during end-to-end testing.

It's served me extremely well over the past year, and I now tend to do all my development by starting off with simple Xml based DALs for proof-of-concept work, before moving across to SQL versions, without having to change the business code. (Just for reference, the reason I did this in the first place was to decouple the development of the business components from the DAL components on a project - I wanted to finish the business logic whilst I was deciding whether I wanted to try developing with MySQL rather than SQL Server.)

Of course, I don't actually do much coding these days, so this may all be a bit old-hat. I'd also favour using an O/R tool these days (e.g. Genome) for data access.

e.g.
using System;
using System.Configuration;

public abstract class CustomerManager
{
    protected CustomerManager() {}

    // this logic presents client classes with a single method
    // for updating / creating records, another personal favourite.
    public void SaveCustomer(Customer customer)
    {
        if (customer.Id == -1)
            CreateCustomer(customer);
        else
            UpdateCustomer(customer);
    }

    protected abstract int CreateCustomer(Customer customer);

    protected abstract void UpdateCustomer(Customer customer);

    public abstract Customer GetCustomer(int id);
}

public class XmlCustomerManager : CustomerManager
{
    public XmlCustomerManager() : base() {}

    protected override int CreateCustomer(Customer customer) {...}

    protected override void UpdateCustomer(Customer customer) {...}

    public override Customer GetCustomer(int id) {...}
}

public class CustomerManagerFactory
{
    public static CustomerManager CreateCustomerManager()
    {
        // the concrete type name comes from config, e.g.
        // "MyApp.Data.XmlCustomerManager, MyApp.Data"
        string typeName = ConfigurationSettings.AppSettings["customerManagerType"];
        Type type = Type.GetType(typeName, true);
        return (CustomerManager)Activator.CreateInstance(type);
    }
}
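To make the testing claim concrete, here's a minimal sketch of a test that runs entirely against the file-based DAL (NUnit-style; the "customerManagerType" config key, the Customer.Name property and the test data are my own illustration):

using NUnit.Framework;

[TestFixture]
public class CustomerManagerTests
{
    [Test]
    public void SaveRoutesToUpdateForExistingCustomer()
    {
        // the test project's config points the factory at XmlCustomerManager,
        // so this reads and writes static test files, not a real database
        CustomerManager manager = CustomerManagerFactory.CreateCustomerManager();

        Customer customer = manager.GetCustomer(1); // id 1 exists in the test data
        customer.Name = "Changed";                  // hypothetical property
        manager.SaveCustomer(customer);             // Id != -1, so UpdateCustomer runs

        Assert.AreEqual("Changed", manager.GetCustomer(1).Name);
    }
}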

Thursday, March 31, 2005

XML Explicit and SQL receive adapter

Following on from my theme of 'hacking' around the limitations of the various wizards / auto-generated artefacts in BizTalk, here's an excellent posting on how to consume xml produced from SQL Server using XML EXPLICIT rather than XML AUTO.

Wednesday, March 16, 2005

Ultra-hydroxy-lipo-activase III (TM)

I've long been fascinated by cod science, and no industry on earth understands the power of a completely made-up word as well as Cosmetics. Here are a few gems to get you going. Feel free to add any you come across.

  • Boswelox

  • Retin-ox Correxion

  • BioSync Activating™ Complex

  • Radiance Hydra Cell Vectors

Monday, March 14, 2005

BizTalk Primer

People keep on asking how they can get started with BizTalk, and so a big thanks to Luke Nyswonger for putting together a great collection of resources, along with a study plan.

Sunday, March 13, 2005

BizTalk Configuration Documenter

As promised, a few thoughts on the above utility.

First impressions are great. It produces a fantastic-looking output, using the "Description" metadata properties of orchestrations and ports to provide the detail. It even includes the diagrams of the orchestrations - how does it do that (same way as the debugger and tracking and profile editor I guess - is it some graphics API)?

I have a couple of gripes - one being that it documents external web reference schema, which can swamp the documentation with content that is not strictly relevant.

On a broader note, it provides the sort of documentation that is actually useful. If you extend the raw output by adding in custom HTML introduction pages you can generate documentation that would make any SDK developer proud. My friend and ex-colleague James, as a certified SCRUM Master, would I'm sure agree that less documentation is more, and specifically that documentation generated from source code (or source diagrams, in the BizTalk realm) is the most powerful technical documentation of all. Functional specifications are of course extremely valuable in defining how an application should behave, but should be written without regard to the implemented technology, IMHO.

I am fed up with a.) reading, and b.) being asked to write, documentation that includes database table descriptions in Word documents. They are worthless, and invariably obsolete the minute they are committed to paper.

People who need to know about a database schema should be able to read a schema diagram, ditto object models. Most source applications (e.g. Oracle, SQL Server, VS.NET, Java) can now be documented using tools that use introspection to determine datatypes, interrelationships etc. The missing information, namely a broader description of the 'problem domain' that the code addresses, can be added in a separate, simple, overview document. And if the documentation tool produces compiled HTML help files (as this utility does), then this overview documentation can be integrated with the technical documentation, if that's desirable.

Let's just have a more intelligent approach to documentation, and specifically an acknowledgement that detailed technical documentation serves a very different purpose from its functional / project proposal equivalents.

One more thing

When using the Sql adapter to receive batches of data, a couple of handy hints:

1. Use a stored procedure. It allows you more control and security, and abstracts the underlying table structure.

2. Always add a parameter that allows you to restrict the size of the output (using SET ROWCOUNT). This allows you to control the flow of input messages - if there's a chance that your sproc is going to pick up 10,000 records, it's probably going to be easier to control if you poll for 100 messages every x seconds, rather than attempting to take the whole lot in one go.
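A minimal sketch of such a sproc, with invented table and column names:

CREATE PROCEDURE GetPendingRecords
    @maxRows int
AS
    -- cap each polled batch so BizTalk drains the table in manageable chunks
    SET ROWCOUNT @maxRows

    SELECT FirstName, LastName
    FROM PendingRecords
    FOR XML AUTO

    -- reset, so later statements on the connection are unaffected
    SET ROWCOUNT 0
GO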

Hacking files in BizTalk

The previous post on Sql debatching mentioned 'hacking' auto-generated files, something that may alarm some people (though not many, if my experience of developers is anything to go by.)

Another example of this is the extraction of primitive values from auto-generated schemas, specifically (in my case) consumed web service responses. As an example, say that you consume a web service that returns a single value (acknowledgement id, boolean flag indicating that an update went through, you know the sort of thing.) Because these values are contained within the auto-generated reference.xsd files they are neither promoted (in fact they couldn't be promoted, as only passthru pipelines are supported for SOAP calls) nor distinguished, and therefore not immediately accessible within expression or assignment shapes.

I think the documentation, if there were any, would suggest using the xpath function to extract the values, but being lazy I find it easier to simply find the relevant reference.xsd schema definition and make the property distinguished, just as you would with your own schema. The danger with this is that the distinguished property status is lost if you refresh the web reference.
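For comparison, the xpath route looks something like this in an expression shape (message and element names invented):

// pull the single value straight out of the response message
ackId = xpath(WebResponseMsg, "string(//*[local-name()='AcknowledgementId'])");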

However, and this to me is the distinction between good and bad practice, this would cause a compile-time error, and would therefore have to be fixed before being redeployed. It'll never cause a runtime exception, and so this does not strike me as bad practice. It's just a shortcut, which in my book is a Good Thing.

Debatching Sql receive adapter resultsets

I spent Friday with a client advising them on some basic design principles, specifically around the use of the Sql receive adapter. It was a fairly simple scenario - extracting data from a SQL Server database, then processing the output messages with some very simple orchestrations. It seemed fairly obvious to me that the easiest way to process the data was to split the output from the receive location, then process each record individually, which is where it started to go awry.

Problem I
The easiest way to split out messages from a batch is to use an envelope schema within a receive port. However, if you create a schema using the SQL adapter wizard (as most people do when using the SQL adapter), the resulting auto-generated schema is not an envelope.
Solution: map this schema to one that is an envelope.

Problem II
Maps are executed after receive pipelines; if you put the map in the receive port, by the time the map is executed, it is too late to debatch, and you would end up with the envelope in the messagebox, rather than the split documents.
Solution: use a loopback send port to get the envelope back into a receive port for debatching.

Problem III
As per the discussion here, it is not possible to map to an envelope schema in a send or receive port (fixed post-SP1?).
Solution: pass the original batch message into an orchestration, map to the envelope, then use a loopback to get the split messages into the messagebox.

Phew.

By now, the proposed solution was so complicated that, although it worked, it was hard to persuade anyone that it represented a 'best practice'.

A day on the train and a bit of free thinking time got me to the better solution - hacking the auto-generated schema.

Although the generated sql schema isn't initially marked as an envelope, there is nothing to prevent you from doing just that. This means that the direct output of the sql receive location will be split in the pipeline, and that all of the individual messages will arrive at the messagebox. As a note, the documents that are split out conform to a partial schema, and so if you want to use these, or better still, map them to some common canonical form, then you will need to create a schema with the same namespace that matches this portion of the original sql schema. (If you do decide to do this, remember to mark each element's 'Form' property as "Qualified".)
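For reference, the envelope settings are persisted as annotations in the XSD itself - roughly as below, though the attribute names here are from memory, and you should set them via the schema editor's Envelope and Body XPath properties rather than by hand:

<xs:annotation>
  <xs:appinfo>
    <!-- Envelope = Yes, at schema level -->
    <b:schemaInfo is_envelope="yes" xmlns:b="http://schemas.microsoft.com/BizTalk/2003" />
  </xs:appinfo>
</xs:annotation>

with the Body XPath stored as a body_xpath attribute on the root element's b:recordInfo annotation.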

I have a sample project that is available on request (I have nowhere to host files, so I'll have to email it out I'm afraid) that selects first and last names from a source table, debatches the output and then maps each document message to a canonical Fullname message. Two separate orchestrations then subscribe to the Fullname message, one that extracts the firstname and inserts into a second table, and another that does the same with the lastname. The sample includes the BizTalk project, sql scripts and a binding file.

STOP PRESS: after three years, I've finally gotten around to uploading the solution for all those who are interested. It's on Codeplex - http://www.codeplex.com/biztalkdebatch

Tuesday, March 01, 2005

Rules engine and repeated element facts

Thanks to Stephen Kaufman for posting on the subject of repeated element fact assertion within the rules engine - admirably filling yet another hole in the available documentation.

Monday, February 28, 2005

Rules Engine III

Thanks to Jon Flanders at Developmentor for answering my Rule Engine query. The answer is to use the "update" action to reassert Fact2 once it has been updated, and to then add in a second predicate to the first rule to prevent the policy from entering a recursive loop.

So:
Rule1: If Fact1 == Y and Fact2 != Y then Fact2 = Y; Update Fact2;
Rule2: If Fact2 == Y then Fact3 = Y;

There are a couple of issues that I have with this:

1. The need to include the Fact2 != Y predicate explicitly seems incorrect to me - I would have thought that the rules engine's evaluation and agenda building should have worked that out itself, and prevented the loop?

2. The above concern, together with the need to explicitly reassert Fact2, means that rule creation has just crossed a critical boundary - and I challenge anyone to find me a business analyst who would feel comfortable with this scenario. All the vocabularies in the world will not make rule creation a BA-friendly activity if analysts are required to understand the internal workings of the rule engine.

Data deletion

There's been a bunch of articles in the press recently about privacy concerns, data being left on old computers, etc, etc. Most of these have mentioned expensive 'cleaning' tools for wiping hard drive contents, and they often mention the need to overwrite empty segments a number of times to guarantee their deletion.

There is a DOS command that will do this for you - I *think* it's post SP1 for XP. It's called cipher, and it overwrites blank segments three times - first with 0, then with 1, then with random numbers. Use the "/w" option.
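For example, to wipe the free space on drive C:

cipher /w:c:\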

Thursday, February 24, 2005

Snowy weather

The prevailing weather conditions (snow, snow, sleet, sleet, snow) have coincided with the arrival of the equipment list for my next holiday.
It includes snow shovel, avalanche probe and transceiver!

Rules Engine investigations

This is a copy of my recent posting to the newsgroups, which I'm posting here in case anyone has a simple answer to what I believe is a simple question...

------------------------------------

I have created a simple policy of two rules, that operate on a single TypedXmlDocument:

<ns0:Root xmlns:ns0="http://RulesEngine.Schema1">
  <Fact1>Y</Fact1>
  <Fact2>N</Fact2>
  <Fact3>N</Fact3>
</ns0:Root>

I want this to work in such a fashion that when testing the rule with the aforementioned sample, I end up with a message with all facts = Y.

My rules are:
1. If Fact1 = Y then set Fact2 = Y
2. If Fact2 = Y then set Fact3 = Y


Stephen Mohr's chapter on the subject states:
"When the available facts exist to evaluate a rule's condition and it evaluates to true, the rule is fired. As rules fire, new facts are asserted into memory. For example, firing a rule might cause a field to be set in a message. This becomes a new fact in memory, which may cause another rule to fire. This continues until no more rules can fire."

I tested the policy, and got the following results:

RULE ENGINE TRACE for RULESET: Policy1 24/02/2005 15:12:56

FACT ACTIVITY 24/02/2005 15:12:56
Rule Engine Instance Identifier: d54d5da0-b88a-4e94-ba09-88932c880c92
Ruleset Name: Policy1
Operation: Assert
Object Type: TypedXmlDocument:Schema1
Object Instance Identifier: 735

FACT ACTIVITY 24/02/2005 15:12:56
Rule Engine Instance Identifier: d54d5da0-b88a-4e94-ba09-88932c880c92
Ruleset Name: Policy1
Operation: Assert
Object Type: TypedXmlDocument:Schema1:/Root
Object Instance Identifier: 725

CONDITION EVALUATION TEST (MATCH) 24/02/2005 15:12:56
Rule Engine Instance Identifier: d54d5da0-b88a-4e94-ba09-88932c880c92
Ruleset Name: Policy1
Test Expression: TypedXmlDocument:Schema1:/Root.Fact1 == Y
Left Operand Value: Y
Right Operand Value: Y
Test Result: True

AGENDA UPDATE 24/02/2005 15:12:56
Rule Engine Instance Identifier: d54d5da0-b88a-4e94-ba09-88932c880c92
Ruleset Name: Policy1
Operation: Add
Rule Name: Rule1
Conflict Resolution Criteria: 0

CONDITION EVALUATION TEST (MATCH) 24/02/2005 15:12:56
Rule Engine Instance Identifier: d54d5da0-b88a-4e94-ba09-88932c880c92
Ruleset Name: Policy1
Test Expression: TypedXmlDocument:Schema1:/Root.Fact2 == Y
Left Operand Value: N
Right Operand Value: Y
Test Result: False

RULE FIRED 24/02/2005 15:12:56
Rule Engine Instance Identifier: d54d5da0-b88a-4e94-ba09-88932c880c92
Ruleset Name: Policy1
Rule Name: Rule1
Conflict Resolution Criteria: 0

FACT ACTIVITY 24/02/2005 15:12:56
Rule Engine Instance Identifier: d54d5da0-b88a-4e94-ba09-88932c880c92
Ruleset Name: Policy1
Operation: Retract
Object Type: TypedXmlDocument:Schema1
Object Instance Identifier: 735

FACT ACTIVITY 24/02/2005 15:12:56
Rule Engine Instance Identifier: d54d5da0-b88a-4e94-ba09-88932c880c92
Ruleset Name: Policy1
Operation: Retract
Object Type: TypedXmlDocument:Schema1:/Root
Object Instance Identifier: 725

-------------------------------------------------------

According to my reading of forward chaining, Rule 1 should have been fired, changing the value of Fact2 to 'Y', which should have caused Fact2 to be reasserted, which in turn would cause Rule 2 to be fired on the second pass, resulting in Fact 3 being set to Y.

I also don't understand why the output shows the facts that are asserted and retracted are given as:

Object Type: TypedXmlDocument:Schema1
Object Type: TypedXmlDocument:Schema1:/Root

What about the actual elements Fact1 and Fact2 - aren't they what I'm evaluating?

Rules Engines

I've been trying to get to grips with the Rules Engine today, and having some difficulty getting my head around the world of Forward Chaining Inference Engines, which is apparently what the BizTalk rules engine implements. Or not...

My investigations led me to this extraordinary posting, which has opened my eyes to a whole world I never knew existed. Both Scott Woodgate and Stephen Mohr (of BizTalk Unleashed fame) get involved, and Mr. Lin has firmly established his reputation as the web's greatest pedant.

(Not for the faint-hearted.)

Friday, February 18, 2005

Zermatt

In keeping with my new year's resolutions 3 & 4, I've just come back from a week in Zermatt. Skiing was fantastic, snow in great condition, weather so-so, and all-in-all a great break.

Zermatt is so picturesque it feels like a film set, and although the skiing isn't so extensive, it was more than enough for 5 days. We even managed a grand dinner in celebration of the 140th anniversary of Edward Whymper's first ascent of the Matterhorn, his account of which is a must-read for anyone interested in Alpine mountaineering.

I now have 5 weeks to get in shape for the next one :-)

Monday, February 07, 2005

If...Then...Else

A common operation within maps is to inject a default value into the output schema if the input field is blank / missing, and otherwise just copy over the value.

This is a bit of a nightmare using the standard Logical Existence functoid, and is much more easily accomplished using inline C#. Simply add a Scripting functoid, connect the input field as the first parameter, add a second parameter set to the default value, then use the following function:

public string IfThenElse(string param1, string param2)
{
    return (param1 == null || param1.Length == 0) ? param2 : param1;
}

Then simply connect the output of this script functoid to your output field.

Thursday, February 03, 2005

BizTalk PowerToys

There's an excellent list of downloadable BizTalk utilities here. I'm particularly interested in the BizTalk Configuration Documenter, and I'll report back on my findings here.

Social Hacking

Quite an elegant piece of social hacking here - and from an 11-year-old!

Wednesday, February 02, 2005

New msn.com

When I first registered XMLSpy (3.5) it asked me whether all HTML files would be XHTML compliant. That was about 5 years ago, so it's taken some time, but I can now confirm that, slashdot moans aside, msn.com is at least now valid XML.

I predict a minor upturn in screen-scraping as an integration technique.

[/. isn't btw]

Tuesday, February 01, 2005

Delivery notifications

A couple of links from Kevin Smith's blog re. delivery notifications:

TimeSpan.Parse() and the Delay shape

When using configurable delays (defined in the .config), I began by assuming that we would always implement delays in days, as we were working with 6 week schedules. I was then asked if we could compress 6 weeks of schedule into a single day, for testing, meaning that I now had to specify the delay in hours. I then spent some time coming up with a clever way to parse config strings to provide maximum flexibility (e.g. to allow delays of days, or seconds, and everything in between.)

It then twigged that the TimeSpan class probably already has a Parse() method (as DateTime does.) It does, and it accepts all manner of useful input.

Details of the input string format here.
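So the whole clever-parsing exercise collapses to something like this (the config key name is invented):

// "42.00:00:00" parses to 42 days; "00:00:30" to 30 seconds - the same
// [d.]hh:mm:ss format covers both extremes, straight from the .config file
TimeSpan delay = TimeSpan.Parse(
    System.Configuration.ConfigurationSettings.AppSettings["scheduleDelay"]);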

Orchestration Scheduler

A nifty way of forcing activation of an orchestration at a given time is to use the SQL adapter to poll a table, in combination with a service window that restricts the poller to fire only once per day.

Create a table that contains any information you might want in your orchestration (i.e. the command message data), then create a SQL receive location that polls this table once a day, and set the service window to open at the time you want the orchestration to fire.

If you set the service window to start and stop at the same time, it is actually open for one minute - the start opens the window at hh:mm:00 and the stop ends it at hh:mm:59 - so as long as you set the poller frequency to > 1 minute, it will only fire once per day.

You can put all sorts of information in the table, and build up quite complex schedules, even forcing multiple orchestrations to start, thereby implementing a multi-threaded scheduler. This also allows you to keep track of the last time a poller was started. It's potentially quite a powerful technique.

Suspended Queue Listener

I keep on losing track of other people's useful posts, so I'm going to start keeping them here.

There is a WMI event that is fired when messages are suspended - see here for a full implementation of a suspended queue listener from Martijn Hoogendoorn.
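If you just want the skeleton, a minimal sketch looks something like this (the event class and property names are from memory - check them against the BizTalk WMI reference):

using System;
using System.Management;

class SuspendedQueueListener
{
    static void Main()
    {
        // watch the BizTalk WMI namespace for suspended service instances
        ManagementEventWatcher watcher = new ManagementEventWatcher(
            new ManagementScope(@"root\MicrosoftBizTalkServer"),
            new WqlEventQuery("SELECT * FROM MSBTS_ServiceInstanceSuspendedEvent"));

        watcher.EventArrived += new EventArrivedEventHandler(OnSuspended);
        watcher.Start();

        Console.WriteLine("Listening - press Enter to stop.");
        Console.ReadLine();
        watcher.Stop();
    }

    static void OnSuspended(object sender, EventArrivedEventArgs e)
    {
        // ErrorDescription is one of the event's properties (from memory)
        Console.WriteLine("Suspended: {0}", e.NewEvent["ErrorDescription"]);
    }
}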

BizTiVo

Further notes on the subscription issue - the more I read on the newsgroups, the more I appreciate how important the pub-sub model underlying BTS is to understanding problems with messaging.
I spent a day with some developers just starting out with BTS last week, and I found that showing the SubscriptionViewer application whilst enlisting and unenlisting orchestrations and send ports was invaluable in attempting to explain BizTalk's murky interior.

This is an extract from a recent posting to a query about messages being suspended (unresumable) when the matching CBR send port is unenlisted, and why they won't resume when the port is re-enlisted.

(Ok, so it's my own answer, but if I can't quote myself on my own blog, where can I?)

"The subscription required for the message to be processed is created when the [send] port is enlisted. When the receive location picks up the message, it is delivered to the messagebox, where the message agent runs through the subscription table to see if there are any services subscribing to the message.

If not, it is marked as suspended (unresumable). If there is a subscriber, but it is stopped, then the message is suspended (resumable). Restart the send port, and the message should clear.

If the send port was not enlisted, then there will have been no available subscriber for the message when it arrived, so BizTalk cannot process it. It is actually quite logical to suspend these messages and terminate the pipeline, as the alternative would require BizTalk to save every single message it ever received, on the premise that someone might at some point want to subscribe to it.

It's a bit like TiVo - you can pause live tv, but you can't go back to a show that was on last week if you didn't record it!"

BizTalk Configuration

Another one for the bookmarks. Configuration is an issue with BTS because of the distributed nature of the product, and the management problem of having multiple config files. We've experimented with both Rules and config files, and whilst the former works extremely well, we've actually settled on the latter (we only have 2 boxes in production, so deployment is less of an issue for us).

Either way, this is a good discussion of the problem.

Monday, January 24, 2005

Naming Conventions

Good presentation by Brad Abrams about naming conventions - nothing new, but it's all spelled out in detail, types, methods, parameters, properties etc. along with the most important point (IMHO) - that naming conventions only apply to externally visible entities (i.e. public and protected.)
Private names are left to the developer / development team.

Sql Adapter and char(1)

There appears to be a bug in the SQL adapter wizard that prevents the creation of receive port schemas if the underlying SQL datatype is char(1).

Balaji Thiagarajan has posted a workaround here.

Saturday, January 22, 2005

Picasa 2.0

I've been playing with Picasa 2.0 recently, and it's fantastic. It has all the look and feel of a Mac program, and is definitely worth checking out, if you store any photos on your computer.

Picasa contact sheet collage.

Picasa picture pile collage.

Tuesday, January 18, 2005

Mini-mac economics

This was posted before the mac conference, following on from the Think Secret story. I've lost the link to the original article, but the quote is quite interesting:

"We might see that as early as next week with the rumored introduction of an el-cheapo Mac without a display. The price for that box is supposed to be $499, which would give customers a box with processor, disk, memory, and OS into which you plug your current display, keyboard, and mouse. Given that this sounds a lot like AMD's new Personal Internet Communicator, which will sell for $185, there is probably plenty of profit left for Apple in a $499 price. But what if they priced it at $399 or even $349? Now make it $249, where I calculate they'd be losing $100 per unit. At $100 per unit, how many little Macs could they sell if Jobs is willing to spend $1 billion? TEN MILLION and Apple suddenly becomes the world's number one PC company. Think of it as a non-mobile iPod with computing capability. Think of the music sales it could spawn. Think of the iPod sales it would hurt (zero, because of the lack of mobility). Think of the more expensive Mac sales it would hurt (zero, because a Mac loyalist would only be interested in using this box as an EXTRA computer they would otherwise not have bought). Think of the extra application sales it would generate and especially the OS upgrade sales, which alone could pay back that $100. Think of the impact it would have on Windows sales (minus 10 million units). And if it doesn't work, Steve will still have $5 billion in cash with no measurable negative impact on the company. I think he'll do it."

The [ACID] transaction is dead; long live the [compensating] transaction

For what seems like forever, developers have learnt about ACID transactions and the "two-phase commit". Like database normalisation, it's just something that everyone needs to know. The "Hello World" of transactions is the bank transfer. If you move money between two accounts, then the debit from the source and the credit to the destination must be completed as an ACID transaction - otherwise someone loses out. Fact.

The birth of the message-centric, asynchronous, loosely-coupled (any more buzzwords?) "service" model has serious implications for the traditional transaction, as it has become impossible to coordinate transactions across service boundaries, over long periods of time. Hence the concept of the "compensated" transaction. If your action fails, then use some kind of compensating process to undo whatever you'd started. (This is easier said than done - as someone once pointed out, if the initial process launches a missile, compensating for it is tricky.)

I now have incontrovertible proof that the two-phase commit is dead; long live the compensating transaction.

I have seen a number of strange payments over recent months from my mortgage company to my current account, matching the amount that I pay them at the beginning of every month. The mortgage company couldn't find any standing order, and once I'd convinced myself that they weren't simply giving me the money, I investigated further. It turns out that I'd not updated the bank account details when I changed mortgage, and that my monthly debit was being paid to a non-existent bank account. However, rather than reject the payment, raise an alert to someone, and sort the problem out, their systems were simply repaying the money about 7 days after the initial payment. This had been going on for about 6 months!

God alone knows where the money went in the missing days, as no one took responsibility for it (and so, presumably, no one was earning interest on it). I also incurred an overdraft penalty during that period. Since I had expected the debit transfer to occur in any case, I don't really have an argument for personal compensation on this one, but it raises an interesting point!

Avian Carrier Standard

It's an oldie, but still worth reading:

http://www.ietf.org/rfc/rfc1149.txt?number=1149

Somewhere I've read a document on the use of a pick-up truck and DAT tapes to achieve ultra-high bandwidth data transfers, but I can't seem to find it. A beer for anyone who can find the link...

Virtually perfect?

I'm thinking of rebuilding my machine to the bare minimum, installing VMWare (or V'Server), and basically running the whole time as a virtual machine. The question is whether my machine is powerful enough to run VS.NET within a virtual machine, and whether V'ware is reliable enough?
Surely the time has nearly come when all machines run as virtual machines within a shell OS container? Storage is so cheap that you can back up entire images, giving instant rebuilds. You could even keep your own image on a DVD (well, almost), and take it with you.

Which is, I guess, where this is going? It's an interesting concept - you could run a series of locked-down OS images for different purposes. As a contractor this is particularly interesting - clients could issue an OS image that would allow someone to VPN / access corporate email, etc.

You could become a virtual employee of a number of different client companies, and if they wanted a day or two's work, you could just boot up the Xyz plc image, and you're already there. It could happen. Really.

On next week's show... Hugo commutes to work in his personal rocket ship. Before travelling back in time to fix his grades, then spending the weekend on Titan.

Regenerate BizTalk [web reference] Files

Occasionally I find myself updating web references, to no avail. VS.NET 'says' it's updating, but nothing seems to happen (the reference*.xsd files are unchanged). I used to get around this by deleting and re-referencing the web service, which is a pain.

Turns out that in addition to the "Update Web Reference" option when selecting the reference, there is a "Regenerate BizTalk Files" context-menu option if you select the Reference.map file beneath the reference itself. This does the trick. :-)

I'm guessing that the update simply changes the .disco / .wsdl files, whilst the regenerate option changes the reference*.xsd files.

Scroogle

This is actually quite interesting:

"These [search] engines crawl the public web without asking permission, and cache and reproduce the content without asking permission, and then use this information as a carrier for ads that generate private profit."

http://www.scroogle.org/gscrape.html

2005 Predictions

It's a bit late, and they're not mine, but here are some interesting predictions for the coming year. http://www.pbs.org/cringely/pulpit/pulpit20050107.html

Thursday, January 06, 2005

Objects of desire

1. I'm warming up to buy a Hush E-Series media server. I know I could build one myself for about 1/5 of the price, but it just wouldn't be the same. (No dual digital TV receiver though :-( )

2. I'm thinking about a Nikon D70 to replace my old Nikon SLR, which broke whilst on holiday last year (if anyone from Conchango HR is reading this - no, no one ever called me back about my valid insurance claim!)

3. After my earlier rant about Panasonic's ability to produce a piece of consumer electronics that would look ugly in a server farm, Apple have gone and produced a top-end server that wouldn't look out of place in your living room. The Apple "Switch" page is now in my favourites...

Sunday, January 02, 2005

Happy New Year

Happy new year everyone - I for one am hoping for a bit more stability in 2005 - 2004 included 3 different jobs, and 2 months without work, which is not so good for the nerves.

2005 is the year of:

1. The Digital Home - I've been talking about getting properly wired (or wireless) for a while now, and even thought about setting up my own business in this area about 18 months ago (as I was finding it so difficult to get any useful, impartial, advice), and I could do with upgrading my existing TV, which has recently decided to revert to black & white pictures only.

In addition to the Sky+, DVD-R / PVR, Apple iTV, Windows MCE options, here are another couple to look out for:


2. Get fitter, and pref. stop commuting. Travelling to work on the train for much of 2004 has proved my undoing.

3. Go on more holidays. I don't have enough of them, and I never plan them in advance.

4. Ski more. I love it, and don't do nearly enough. No new contracts in Feb / March thank you :-)

Have a great year everyone.