Thursday, October 28, 2004

Orchestration debug info

Wouldn't it be nice to be able to use the Console.Write() function within an orchestration shape to view internal message and variable values, without going through the orchestration debugger?

Turns out you can... using DebugView from SysInternals.

If you use the System.Diagnostics.Trace.Write function within an expression shape, the output is picked up by DebugView.
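
For a quick test of the idea (the orchestration and variable names below are invented), the same call can be tried from a console snippet first, to confirm DebugView is picking it up:

// Minimal sketch - check the Trace output turns up in DebugView before
// wiring it into an orchestration. Compile with /d:TRACE (Visual Studio
// projects define TRACE by default).
using System.Diagnostics;

class TraceToDebugView
{
    static void Main()
    {
        int retryCount = 3;   // stand-in for an orchestration variable

        // Inside an expression shape you'd use just this one line, fully qualified:
        // System.Diagnostics.Trace.Write("MyOrch: retryCount = " + retryCount);
        Trace.Write("MyOrch: retryCount = " + retryCount);
    }
}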

Tuesday, October 26, 2004

Business intelligence

One of the business processes I'm modelling at the moment includes the following scenario:
A message, type MessageType1, is received by an orchestration, which publishes it via a send port, then listens for a message, type MessageType2, which correlates with the first message using some common attribute. Pretty simple stuff.

The only fly in the ointment is that occasionally messages of MessageType2 will arrive at the receive location without a corresponding MessageType1 match. These messages would normally be suspended, as messages "with no matching subscription", but in our scenario we need to process these 'unmatched' messages using a separate orchestration. This new orchestration therefore has an activate receive shape that subscribes to MessageType2 messages. However, this would pick up every MessageType2 message, including those that are also picked up by the convoy, which we don't want.

A potential solution is to mark the unmatched messages in advance, using some attribute that we can filter by. The problem with this is that the business were not making any distinction between the matched and unmatched messages at the point of creation.

The answer was to go back to the business to better understand why some messages will be correlated, and others won't, which in turn forced the business to look at the process in more depth. This proved to be a very useful lesson for all concerned, and coincidentally helped to clarify a number of other issues with the overall solution.

Another example of the need for a close relationship between BAs and designers at the earliest stages of solution design.

Friday, October 22, 2004

Client relations

Another gem from TheDailyWTF (abridged):

(Developer) So how do we determine the status on an order?

(Client) Look at the field called "status" on the order.

Does every order have a status field?

Yes, every order has a status field.

Absolutely, positively, every order? So if one is missing it is a user error and we do not have to process it?

That's right, you should always have a status field.

- 3 months later -

Hey, the system isn’t processing some orders. Go fix it.

- 2 hours later -

There is no status field on the order. You said that would never happen.

Oh, but this is remote call forwarding, that has no status field.

But you said that every order will have a status, we wrote this down, now you are telling us it does not?

Every order DOES have a status, just not the remote call forwarding ones.

You keep using that word, I don’t think it means what you think it does...

Service instance lifecycles

Found this in an article about BPEL (albeit from 2002):

"Web services implemented as BPEL4WS processes have an instanced life cycle model. That is, a client of these services always interacts with a specific instance of the service (process). So how does the client create an instance of the service?

Unlike traditional distributed object systems, in BPEL4WS instances are not created via a factory pattern. Instead, instances in BPEL4WS are created implicitly when messages arrive for the service. That is, instances are identified not by an explicit "instance ID" concept, but by some key fields within data messages. For example, if the process represents an order fulfillment system, the invoice number could be the "key" field to identify the specific instance involved with the interaction. Thus, if a matching instance isn't available when a message arrives at a "startable" point in the process, a new instance is automatically created and associated with the key data found in the message. Messages can only be accepted at non-startable points in a process after a suitable instance has been located; that is, in these cases the messages are in fact always delivered to specific instances. In BPEL4WS, the process of finding a suitable instance or creating one if necessary is called message correlation."
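
To illustrate the idea outside BPEL (just a sketch - the class and key names below are mine, not the article's), the "find a matching instance by key, or create one at a startable point" logic boils down to something like this:

using System;
using System.Collections.Generic;

class OrderFulfilmentInstance
{
    public string InvoiceNumber;

    public void Accept(string message)
    {
        Console.WriteLine("Instance " + InvoiceNumber + " handling: " + message);
    }
}

class Correlator
{
    // Live instances, keyed by the correlation field found in the messages.
    private readonly Dictionary<string, OrderFulfilmentInstance> instances =
        new Dictionary<string, OrderFulfilmentInstance>();

    // 'startable' mimics a "startable" receive point in the process.
    public void Deliver(string invoiceNumber, string message, bool startable)
    {
        OrderFulfilmentInstance instance;
        if (!instances.TryGetValue(invoiceNumber, out instance))
        {
            if (!startable)
                throw new InvalidOperationException(
                    "No matching instance for key " + invoiceNumber);

            // No match at a startable point: create a new instance and
            // associate it with the key data found in the message.
            instance = new OrderFulfilmentInstance();
            instance.InvoiceNumber = invoiceNumber;
            instances.Add(invoiceNumber, instance);
        }
        instance.Accept(message);
    }
}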

Snippet Compiler

One of my biggest annoyances with designing orchestrations in BizTalk is how you test expressions. As an example, this morning I've been working with Timespans, and wanted to know:
  • whether you could have a negative timespan (by subtracting the current datetime from a date in the past)
  • what happens if you put a negative timespan into a delay shape.
I decided I needed to run a few lines to sanity-check what I was trying to do, using the following code:
// To generate a negative TimeSpan, subtract a later date from an earlier one.
System.DateTime deadline = System.DateTime.Now;
System.Threading.Thread.Sleep(1000);
System.TimeSpan timespan = deadline.Subtract(System.DateTime.Now);
// timespan should now be negative (roughly -1 second) - so what happens here?
System.Threading.Thread.Sleep(timespan);

I didn't want to open a new console project to work this out, so I dug around and came across SnippetCompiler (a colleague pointed me at it). It allows you to test small snippets without going through the whole "new...project" routine, and looks set to become invaluable.
I do have a couple of gripes with the AutoComplete / IntelliSense functionality, but it's probably unreasonable to complain when it's free ;-).

Synchronicity part 2

The synchronisation experiment has not been a great success. As many might have predicted, events and contacts are uploaded / downloaded in some apparently random manner, causing everyone's birthdays to be duplicated every time I sync Plaxo, and contacts on my phone to be deleted by ActiveSync. It's infuriating that this problem still plagues PIM software.

Anyone I've met in the last three months has been wiped, including the electrician who's currently rewiring my flat, in my absence!

I do, however, have a fairly complete set of info on Plaxo, sync'd to my home and work desktops. Adding in the Smartphone causes a few headaches, but things are definitely better than they were.

Thursday, October 21, 2004

C500 update

The first ROM upgrade for the C500 is here already. Don't forget to save photos onto the storage card or your computer; otherwise they'll get zapped by the hard reset.

(BTW if you want something a little more permanent, send your pics to Stickpix via MMS [07746197446], and they'll print them out and send them to you. FREE for a limited period.)

Alice in Wonderland

One of my main concerns with our current design is the requirement for an alert to be triggered if no message of a given type is received before some given deadline (with the deadline being message-instance specific). This is difficult to accomplish on a message-event basis, as orchestration instances are instantiated by the arrival of a message. An instance cannot be aware of the non-existence of a message before it arrives, as it doesn't exist itself! (I'm sure something similar appeared as a plotline in The Hitchhiker's Guide to the Galaxy.)

This means that the orchestration instance that would ordinarily activate upon receipt of a message must already exist before the message arrives. This is achieved by seeding an orchestration instance with a "message-expected" message that includes the deadline date, and having it then listen for the expected message (even worse - it might actually have to poll for the message itself).

This seems to run completely counter to the message-driven architecture that BTS espouses. An added complication is that once the instance is running, the deadline to which it is bound is fixed - you can't go in and tweak it - unless the orchestration is designed so that it can receive further updates to its own deadline (a la "sequential uniform convoy"), by which point the orchestration is so complex that it would have been easier to shift the scheduling to a separate external application. This is particularly relevant in the current situation, as the deadline might be four months after the activation message, giving more than enough time for real-world events to require a change in the schedule.
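
For what it's worth, the shape of the logic each seeded instance ends up implementing looks roughly like this (plain C#, not orchestration code - all the names are illustrative), including being able to take a revised deadline while it waits:

using System;
using System.Threading;

class DeadlineWatcher
{
    private readonly AutoResetEvent messageArrived = new AutoResetEvent(false);
    private readonly AutoResetEvent deadlineRevised = new AutoResetEvent(false);
    private DateTime deadline;   // per-instance deadline from the "message-expected" message

    public DeadlineWatcher(DateTime initialDeadline)
    {
        deadline = initialDeadline;
    }

    // Called when the expected message finally turns up.
    public void OnMessageReceived()
    {
        messageArrived.Set();
    }

    // Called if the business revises the deadline (the "further updates" case above).
    public void ReviseDeadline(DateTime newDeadline)
    {
        deadline = newDeadline;
        deadlineRevised.Set();
    }

    // Blocks until the message arrives or the (possibly revised) deadline passes.
    public void Run()
    {
        WaitHandle[] handles = { messageArrived, deadlineRevised };
        while (true)
        {
            TimeSpan remaining = deadline - DateTime.Now;
            if (remaining <= TimeSpan.Zero)
            {
                Console.WriteLine("ALERT: expected message did not arrive by " + deadline);
                return;
            }
            // Wake up at least daily so a months-long wait never overflows the timeout.
            if (remaining > TimeSpan.FromDays(1))
                remaining = TimeSpan.FromDays(1);

            int signalled = WaitHandle.WaitAny(handles, remaining, false);
            if (signalled == 0)
            {
                Console.WriteLine("Message arrived in time - no alert needed.");
                return;
            }
            // Otherwise the deadline was revised or the interval elapsed:
            // loop round and re-evaluate against the current deadline.
        }
    }
}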

So, we're now in a situation where BizTalk itself is managing the receipt of messages - it has to find out what messages are expected, create a new orchestration instance to listen for each one, and raise an alert if they don't arrive on time.

Et voila. Like some great illusion, we've managed to turn BizTalk inside-out, and make it the application.

Daily WTF

From today's Daily WTF:

///TODO: add so that it actually does something with orderPlanWeekId
///TODO: Maybe I don't need to, try to understand what the above TODO was for


http://thedailywtf.com/archive/2004/10/20/2763.aspx

It's exactly the sort of comment I find myself writing :-(

Spirit of "Martian"

I don't really understand what these guys are selling, but I like it anyway. Anyone who actively promotes a reduction in "the impact of visual mediocrity on the quality of life of those who use computers" gets my vote.

If only it was wireless...

Wednesday, October 20, 2004

Google go-slow

Don't know about anyone else, but the Google desktop search engine causes standard Google searches to grind to a halt, even when preferences are set to ignore desktop matches.

Tuesday, October 19, 2004

Date-dependent convoys

The project I'm currently working on is very heavily driven by various external system dates, and we've been having exhaustive discussions on how to implement the schedule - using BizTalk itself, using an external "scheduler" application, or relying on the source data systems to provide data at the requisite time. (My argument has always been that BTS is the wrong place to have any scheduling, as BTS is message-driven and should simply react to what it's been given; however, I'm fighting a one-man war on that front!)


There are two types of scheduled process involved:
1. Data must be pulled (itself something I don't like) from systems at a given date.
2. Data will be pushed into BTS when it's made available, but cannot then be sent on to the consumers until some nominal "start date" has been passed. Furthermore, if a nominal "end date" has also been passed then the message should be killed off entirely. The final issue is that of updates. If a message is submitted before the start date, updates to the data contained in the original message could also be submitted via the same channel, in which case the final update is the one that the consumer is interested in (i.e. managing duplicates internally).

I may well post more about this scheduling business, as it's something I haven't directly come across before, and it raises some fairly fundamental issues re. message-driven "real-time" architectures (and how appropriate they are in such circumstances).

In the meantime I've done a simple demo that might be of interest. I've modelled the second scenario, where messages are submitted and then held, using a sequential uniform convoy to suck up all the messages and correlate them; since the demo was quick to put together, I thought I might as well post it here in case anyone else finds it useful.

The zip file contains the following BTS artefacts:
1. A message schema, containing the aforementioned start and end dates, together with a field to use for correlation, and a data field that can be used to verify that the latest message is the one that comes out of the orchestration.
2. A property schema used to promote the correlation data.
3. The orchestration. This has three possible outputs:
  • If the end date of the initial message has passed, the message is sent to a port marked as "timedout".
  • If the start date has passed already, the message will simply shoot out the end, unchanged.
  • If the start date has not yet passed, the orchestration will sit and listen for further messages of the same type (and correlation set). New messages are used to overwrite the original data. When the start date arrives, the last message to be received will be the one that is sent.
In reality it looks as if our messages won't contain the start and end dates, and that the orchestration will have to call a web service to get them (the dates are held in a separate system), but this demo works pretty well, and gives a quick insight into convoys (the decision logic is sketched below).
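
For reference, the branching the orchestration performs on the initial message boils down to this (a C# paraphrase only - the type and field names are illustrative, not taken from the demo artefacts):

using System;

class HeldMessage
{
    public DateTime StartDate;
    public DateTime EndDate;
    public string CorrelationId;   // the promoted field used by the convoy
    public string Data;
}

static class ConvoyLogic
{
    // Decides which of the three outputs the initial message takes.
    public static string Route(HeldMessage msg)
    {
        if (DateTime.Now > msg.EndDate)
            return "TimedOut";          // send to the port marked "timedout"

        if (DateTime.Now >= msg.StartDate)
            return "SendImmediately";   // start date already passed - send unchanged

        return "HoldAndListen";         // wait in the convoy, overwrite the data with
                                        // any later message in the same correlation set,
                                        // and send the last one when the start date arrives
    }
}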

Enjoy.

STOP PRESS: As Blogger doesn't host files, I'll have to host it from home, which I can't set up from here. I'll sort it asap.

Friday, October 15, 2004

Synchronicity

Ever since I first connected a mobile phone to a computer, c. 1998, I've been looking for synchro-nirvana, with a single view of all my contact and calendar info from computer to mobile to web. This is actually much more important to me than email, which I'm not that bothered about. My various experiences with sync'ing mobile phones have been very frustrating, and the lack of investment in sync software from Nokia (my preferred phone supplier ;-) ) has been very disappointing, to say the least.

Well, I've now gone and done it - bought myself a smartphone. Sync'ing with the computer is great, as you'd expect. ActiveSync has been the bane of my life for a long time, but the latest version seems very stable, though that might be the USB cable, which is a little more reliable than IrDA!

The more complex sync is from desktop to desktop. If I add up both client-site and home computers I've probably had five or six different 'main' sources of PIM this year alone. Sync'ing these is a lot more complicated, and has until now involved a CD with a 500MB .pst being burnt / imported at various intervals.

I've decided to end this nonsense, and have consequently settled on Plaxo as my central reference for all calendar and contact info (tasks and notes too, though these aren't so important). I now simply sync Plaxo with my work computer, home computer, parents' computer, laptop, AND smartphone.

Am I now the most synchronised man on the planet?

Autocomplete closure

Many of my friends and ex-colleagues are aware that I once disgraced myself by sending a humorous (and genuine I might add - it was NOT a photoshop-job) picture to my entire company as the result of an AutoComplete trauma; "XYZ Developers" turned out as "XYZ Global". (I'll post the picture at a later date when I'm back at home.)

I took a couple of things away from this experience:
1. Beware AutoComplete.
2. When you recall a message, the message isn't recalled silently; rather, Exchange sends a message to the intended recipient asking if they agree to have the message recalled, which is a bit like adding a "READ ME" notice in size 48 font (bold) to the original message.

I survived, but I've been wary of AutoComplete ever since, especially as it also has a habit of storing invalid email addresses. It essentially scoops up any old cr*p that you type in the To/cc/bcc box and adds it to the list.

Imagine my delight, therefore, when I hit delete whilst highlighting an invalid old email address in the a/c list, whereupon it vanished. I did a quick Google, and it turns out that this is supported behaviour (see KB289975), although it has to be done one entry at a time.
A more radical suggestion is just to blitz the entire a/c list (KB287623) - take your pick.

Google desktop

No, not the annoying search toolbar that searches the internet, but a personal Googling of your local machine. Read some techie background on it here. Whilst we're on the subject, Picasa is excellent for managing images, and is owned by Google - as, I believe, is Blogger.

IPO or no-IPO, they do seem to be buying / producing good software. (I also love the fact that it works with Firefox - whose time has surely come.)

Wednesday, October 13, 2004

Grumpy old men, and computers

Whilst in Cambridge (see previous post) I'm staying with my parents in Suffolk. Last night I had to watch "Grumpy Old Men" on TV - my father's new favourite programme - and of course the subject of computers came up, and how superfluous they are to daily life. I *think* I was expected to put up a spirited defence of them at some point, but instead I started thinking about the different ways in which people use them.

I hardly ever use my computer recreationally. I don't play games, I don't edit photos, I don't download music, I don't do my own accounts, write letters to the local paper, or study for a correspondence course at the Open University. In fact, when I'm not working with it, my home computer is really only used to store stuff (photos (unedited) off my digital camera, music ripped off my own CDs, contacts, calendar, etc.). It's basically a large virtual filing cabinet, which makes the demise of the Martian Netdrive all the more tragic.

For those of you who never saw this: for a brief moment in time a couple of years ago, a company called Martian was selling a wireless, 'silent' hard drive that you just plugged into the mains and left in a cupboard somewhere. Unfortunately it never really took off, and I believe they now work with OEMs rather than selling direct. Surely its time has come?

Mapping (old school)

I'm currently working in Cambridge (UK), and was looking up an address on the various mapping sites in the UK - primarily streetmap and multimap. These two always give inconsistent results, and searching them effectively is something of a black art. (e.g. "East Road" gives only half a dozen matches with multimap, all in London, but "east road cambridge" gives me the correct match - even with "GB" selected? Yesterday multimap told me that my current postcode doesn't exist, whilst streetmap found it immediately - surely these guys use the same data???)

Anyway, whenever I find what I'm looking for I always go for the "aerial" button to gawp at the aerial photo, and this time I discovered the excellent map overlay that multimap have done. If I were American I'd tell you how cool this is, but I'm not, so I'll just let you figure it out for yourself here.

Monday, October 11, 2004

SOAP receive pipelines and missing messages.

I always thought that SOAP send and receive ports had to use the PassThruxxx pipelines, but I've found out today that not only is that not the case, but also that not using an XmlReceive pipeline has been the cause of all my problems over the last couple of days.

If you use a PassThru pipeline, promoted properties are not promoted (fairly obvious when you think about it), so receive shapes that have a correlation set attached are never activated, and messages are suspended with the "no matching subscription" error.

Aaargh.

Thursday, October 07, 2004

Flat file schemas and xs:date anomalies.

I've been struggling with a flat file schema for the past couple of days, and have been having some very inconsistent results with the xs:date datatype. I'm using a custom pipeline with the flat file disassembler to convert a pipe-delimited file to XML, then applying a map in the receive port to convert the output to a canonical form.

One of the fields is a date, in the form "dd MMM yyyy", e.g. "10 nov 1980". So, I cast this to the xs:date datatype in the schema, and set the "Custom date/time format" property to "dd MMM yyyy". This works very well when the date is there, but blows up when the field is missing, even though I've set the schema attribute "/schema/annotation/appinfo/@suppress_empty_nodes" to "true", which should cope with missing values.
The exception thrown is a pipeline exception, complaining about the format of the date. It appears as if the pipeline is attempting to cast the non-existent value to the date format before doing the null-value test. I *think* this is a bug?
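
A rough repro of the cast the disassembler appears to be attempting, outside BizTalk (the format string matches the one above; the sample values are mine):

using System;
using System.Globalization;

class DateParseCheck
{
    static void Main()
    {
        string format = "dd MMM yyyy";

        // A populated field parses fine:
        DateTime ok = DateTime.ParseExact("10 Nov 1980", format, CultureInfo.InvariantCulture);
        Console.WriteLine(ok.ToString("yyyy-MM-dd"));

        // An empty field throws a FormatException - which looks like what the
        // pipeline is doing before it ever gets to the suppress_empty_nodes check:
        try
        {
            DateTime.ParseExact("", format, CultureInfo.InvariantCulture);
        }
        catch (FormatException ex)
        {
            Console.WriteLine("Empty value: " + ex.Message);
        }
    }
}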

My first work-around was to use an xs:string instead, in which case the suppress_empty_nodes attribute works, and no output element is produced for the missing value.
This morning, however, I've just used a standard xs:date datatype (without the custom format), which should only accept the format "yyyy-MM-dd", rerun the test using "xxx" as the date in question, and it passed!!

So... custom date formats definitely cause a problem with missing values, but they do at least attempt to resolve the value to a date ("10 nnv 1980" threw an expected exception.)
Standard date formats seem (this morning) to accept anything you give them, but they do at least obey the suppress_empty_nodes instruction.

SP1 anyone (yes - I have installed the Rollup Package 1)?

(I originally posted the issue to the newsgroups here, thanks to those who replied.)


Tuesday, October 05, 2004

Car-crash IT

Seeing a TV programme described as car-crash TV the other day made me think about its IT equivalent. We've all seen it - an executive gets sold on a BIG IDEA after a couple of conferences in California / Nice / New York (delete as appropriate), doesn't really understand it, and starts down the road to disaster, lip-licking consultants in tow.
You know it's going to go wrong from the very beginning, but are powerless to do anything (other than stand to the side and watch, transfixed as it disintegrates, hoping you can write it out of your CV!)

Those working in Government departments must have a particularly good view of this...

Web services everywhere == SOA ?

This is something that has been vexing me on my current project. My client has made a strategic decision to embrace web services, and thereby achieve SOA nirvana. Trouble is, no one here seems to know why an SOA is important, or how slapping web services on the front of all their legacy systems will help. The answer, of course, is that it won't, and that sticking a SOAP interface on a batch job is simply missing the point.

Drift back through the years, and you may remember the fanfare surrounding Bill G's "Business at the Speed of Thought". It foresaw the frictionless interchange of data within an organisation and beyond, bringing companies ever closer to their partners and customers. Executives would demand real-time business critical information, and companies would react to change in a fraction of the time... etc, etc. The idea got lost for a while, but is back with a vengeance along with the SOA, ESB and "real-time enterprise" (e.g. IBM's relentless "on-demand" adverts!)

The RTE redefines the way data is exchanged - it'll be on-demand, event-driven and message-based (you can add in loosely-coupled and asynchronous if you like that sort of thing). Item-level data changes can be broadcast to anyone who needs to know, and lengthy, error-prone, batch jobs will no longer be necessary* (is there such a thing as "downtime" in the global economy?)

This obviously requires rethinking the way your business operates.

So - you can't just SOAP your legacy systems and claim SOA victory. A well-designed SOA must include deep changes in your business processes, and if you find that after the consultants have left you're still working in exactly the same way, only with SOAP requests replacing flat-files, then ask for your money back!

-------
* I realise that batch jobs will still prevail in many areas where large volumes of data are being exchanged - the important thing is to recognise where they still have value, and to learn to say NO when someone suggests replacing them with a web service!

Friday, October 01, 2004