Mohammed Atef’s Technical blog

WCF and WF vs. BizTalk

Someone might look at Windows Workflow Foundation (WF) and Windows Communication Foundation (WCF) and ask why we are still using BizTalk Server.
Don’t these exciting technologies make BizTalk obsolete? No. Today, WF and WCF are foundational platform technologies on which future Microsoft applications will be built.
They are both excellent at addressing particular problem areas around unified communication and workflow.
BizTalk is a framework for integration and business process automation that sits on top of the .NET Framework – and it will sit on top of WCF and WF later this year. BizTalk is used to build composite, loosely coupled applications on top of the services that WCF exposes. It provides things such as state management, compensation, access to various legacy protocols and applications, and an administration model for operationalizing the applications. BizTalk is a broker that sits between various systems in order to abstract them from one another; it decouples them and eliminates point-to-point interfaces.
Similarly, while you can build a portal solution using Microsoft Internet Information Services (IIS) and ASP.NET technologies, does that replace Microsoft SharePoint Server? Likewise, I can build my own messaging solution using WCF and WF, but designing, building, and testing such a solution takes me away from my primary goal of helping my organization solve business problems.

I hope this helps.


June 29, 2009 Posted by | Biztalk | , , | Leave a comment

AS/400 BizTalk Adapters

Microsoft® BizTalk® Server provides a suite of new technology adapters, including a new set of adapters for IBM host integration scenarios covering DB2, WebSphere MQ, host applications, and host files. In this post we will look at the WebSphere MQ and Host Files adapters in detail and explain their capabilities and use with AS/400.

WebSphere MQ Adapters
A WebSphere MQ server on Windows is no longer required to connect to remote queue managers on non-Windows® systems. With the server adapter, a component is installed on the MQ computer that interacts with the MQ APIs. Let's look at the MQ adapter architecture with BizTalk in the image below.

The image above depicts two environments: BizTalk Server running on Windows Server, and an MQ server running on AS/400. The MQSC adapter enables send and receive functionality using the chosen client. In the client's MQ architecture, messages are received from and sent to queues over a channel. A channel is a uni- or bi-directional, securable mechanism for reliable communication, defined on a given queue manager on the server. Between the client and the queue manager on both ends sits the Message Channel Agent, which controls the sending and receipt of messages.
Essentially there are three main components to using the Client adapter: MQ Client configuration, BizTalk Server port configuration, and the solution design itself. Let’s cover each of these in turn.

Client Configuration
MQ client configuration starts with a choice between the Base Client and the Extended Client. The principal difference between the two is that the Extended Client is fully transactional while the Base Client is not. However, the BizTalk Server adapter is still able to guarantee at-least-once delivery with the Base Client due to the way it uses the client. Essentially, the adapter first retrieves a batch of messages non-destructively, and only once they are safely in the MessageBox database are they removed from the queue. This is achieved using a cursor in MQ, which allows locking and browsing a queue without removing the messages. The same process occurs in reverse when sending messages: the adapter checks that the messages are safely on the queue before responding to the BizTalk Message Agent, which then removes them from the MessageBox database. Because of this, failure conditions can cause duplicate messages to be processed, and your solution must take this into account if necessary. The Extended Client ensures once-and-only-once semantics.
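The browse-then-delete protocol just described can be sketched in a few lines. This is an illustrative model, not the adapter's actual code; the queue and MessageBox are stand-in arrays:

```javascript
// Sketch of the Base Client's at-least-once protocol: browse a batch
// non-destructively, persist it to the MessageBox, then delete it from
// the queue. A crash after persisting but before deleting re-delivers
// the batch on restart: duplicates are possible, message loss is not.
function receiveBatch(queue, messageBox, batchSize) {
  const batch = queue.slice(0, batchSize); // cursor browse; messages stay queued
  messageBox.push(...batch);               // safely stored first
  queue.splice(0, batch.length);           // only then removed from the queue
  return batch;
}
```

If the process dies between the push and the splice, the next call sees the same messages again, which is exactly the duplicate scenario the Base Client forces your solution to handle.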

BizTalk Server Configuration

There are a multitude of options here, and we will consider the most important ones. The first thing to mention is that configuration is handled differently on the send and receive sides. On the receive side, the transport configuration is defined on a receive location, while on the send side a send port is created and configured as shown below.

First, the connection between the BizTalk Server and the MQ Server must be defined. This is covered by four properties: channel name, connection name, queue and queue manager name.
The MQ adapter supports fully ordered delivery on both the receive and send sides. When both are configured, First In, First Out (FIFO) processing all the way through BizTalk Server is possible.
Now let's see how the orchestration is implemented, as shown in the image below.

The image above shows an orchestration using the MQ client adapter and manual correlation to implement a solicit-response pattern. First, the orchestration sets the MQ header MQMD_CorrelID to a unique ID and initializes a correlation set containing this context property. Next, the message is sent to an MQ queue. After a period of time, a second message is received on a different queue. The adapter automatically copies this property, and the receive port has a following correlation set – the same one initialized on the send port. This ensures the second message is correlated to the first and the response is delivered to the correct orchestration instance. Using this approach, two different queues can be used to implement a more flexible solicit-response pattern.
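As a sketch, the first step might look like this inside a Message Assignment shape. This is illustrative XLANG/s, not code from the original post: msgRequest and msgOriginal are hypothetical message names, and MQSeries.MQMD_CorrelId is assumed to be the context property exposed by the adapter.

```
// Message Assignment shape (illustrative XLANG/s sketch)
msgRequest = msgOriginal;
msgRequest(MQSeries.MQMD_CorrelId) = System.Guid.NewGuid().ToString();
// The send port initializes a correlation set on this property;
// the receive port "follows" the same correlation set.
```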

The BizTalk Adapter for Host Files is covered in a separate post.

I hope this helps.

June 24, 2009 Posted by | Biztalk | , , , | 2 Comments

BizTalk Adapter for Host Files

This adapter, as its name suggests, enables access to files in a host environment. The Host File adapter enables full CRUD (Create, Read, Update, and Delete) access using a SQL-like syntax, implemented in a new managed provider. Used stand-alone, the provider lets SQL be used through the familiar ADO.NET programming model for data access. The adapter supports usage on both send and receive ports.

Defining File Structures

There are two constituents to creating a BizTalk solution using the Host File adapter. The first, specific to the Host File adapter, is a file definition; the second is the familiar schema generation discussed above in the context of the DB2 adapter. A schema is required to define the file data sent back and forth as XML, while the file definition is needed to map the file's contents to XML. The figure below shows this relationship in detail. Taking the file definition first, Host Integration Server introduces a new Microsoft Visual Studio® project type, Host File, enabling the creation of host file definitions. A definition can be created in several ways: manually, through program import, or from an existing Host Column Definition (HCD) file.


By selecting "Add Generated Items" and then the File Adapter, we can now create the schema used for transferring data between BizTalk Server and the host. There are three options: updategrams, SQL SELECT, or OS/400 Command. The SQL SELECT option allows the host file system to be queried using a SQL-like syntax, for example select contact,title from customer.
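For instance, a slightly fuller query against a hypothetical CUSTOMER host file might look like this (the file and column names here are assumptions, not from the original post):

```sql
SELECT CONTACT, TITLE
FROM CUSTOMER
WHERE REGION = 'EMEA'
```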

Configuring Ports

Once both the schema and the metadata file assembly have been created, a port can be configured. As already mentioned, the adapter supports both send and receive ports, allowing file polling on the receive side and one-way or solicit-response on the send side. The figure below shows the Receive Location property page for the File Adapter. The connection string can be provided manually or through the Data Source Wizard, as with DB2. There are some additional options for host files, the principal one being the Metadata property, which contains the name of the metadata assembly containing the file definition.

The Host File adapter does not support batching and returns only a single message for all data returned from a file in one operation. If single record/message processing is required, an envelope schema can be used to split the message up during receive pipeline processing. This technique may be used whether a receive location or a solicit-response send port is being used to return data. Splitting the message enables parallelism, allowing BizTalk Server to process multiple records simultaneously rather than having to de-batch them in an orchestration.
The Root element name/namespace are required so the adapter can create the response messages; their purpose is to ensure the incoming message (generated by the adapter) matches a particular schema. The SQL Command property specifies the SQL query to execute. The Update Command allows the retrieved records to be updated or deleted as required; its principal use is to ensure that the same data is not picked up twice due to the polling nature of the adapter. If update is specified, the values for each field to update on each retrieved record must be specified. The URI property value must be unique.
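A typical polling pair might look like this (the file and column names are hypothetical): the SQL Command selects unprocessed records, and the Update Command flags them so the next poll skips them.

```sql
-- SQL Command: executed on each poll
SELECT ID, CONTACT FROM CUSTOMER WHERE PROCESSED = 'N'

-- Update Command: applied to the retrieved records
UPDATE CUSTOMER SET PROCESSED = 'Y'
```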
On the send side, the figure below shows the options. As you can see, the configuration is much simpler: only the connection string and the document namespace/root name are required, along with a unique URI. This is because all the information required for accessing the host file is provided in the message schema. It is to this configuration that we turn next.


Although the BizTalk Adapter for Host Files provides a great deal of functionality for processing file data through BizTalk Server, there are a couple of limitations you should be aware of. First, the adapter is not transactional with respect to the host file system. Although the host will ensure a failure cannot corrupt the file being updated, updates may be lost.
The second limitation is that the adapter does not support dynamic sends. A dynamic send is one where the URI containing the transport and endpoint address is specified at runtime. The adapter only allows the URI details to be specified on a static send port, and BizTalk Server will not allow the adapter to be selected on send ports defined as dynamic in an orchestration.
I am going to prepare a sample for the Host Files adapter with BizTalk in a few days.

I hope this helps.

June 24, 2009 Posted by | Biztalk | , , | 1 Comment

How BizTalk Maps Work


Every BizTalk developer has used the Map Editor, but have you ever asked how it works? In this simple post I would like to tell you how the BizTalk Map Editor works.

How does it work?

BizTalk maps work from top to bottom of the target schema; in other words, from target to source. The map walks the target from beginning to end: mapping rules are constructed and executed as links are encountered on the output side, data is extracted from the source when a link is encountered in the target, and finally the data is pulled into the file structure based on the structure of the output schema. I think this is the best approach for processing the map, because sometimes we set target values without using any source elements.
A BizTalk map is designed to generate XML output, and the map displayed graphically is actually an XSLT style sheet built to write output in the specific format defined by the output schema. You can easily generate the *.xsl file containing the map code at any point in the development of your map and inspect how the map rules are being generated.
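As a rough sketch (with hypothetical Customer and Order schemas, not from a real generated map), a map boils down to a style sheet like this: one template matching the source root that writes out the target structure and pulls source values in as it goes.

```xml
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/Customer">
    <!-- The target (Order) structure drives the output; source values
         are pulled in wherever a link appears in the target. -->
    <Order>
      <ShipAddress><xsl:value-of select="Address/text()"/></ShipAddress>
    </Order>
  </xsl:template>
</xsl:stylesheet>
```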

You can generate the XSL map code by executing the Validate Map function, found by right-clicking the map name in Solution Explorer.

Try it with any BizTalk map you have; if you have good knowledge of XML and XSLT, you can also understand how the maps are built.

Understanding the XSL Map Code

When you examine the XSL map code you will find many unexpected variables. The compiler creates and uses internal variables for its own use, most often to contain the output of functoids. Any time a logical operation produces output that must be consumed by another operation or placed into the target message, the compiler creates an internal variable in which the value can be stored temporarily.

<xsl:variable name="var:v1" select="Address/text()"/>
<ShipAddress><xsl:value-of select="$var:v1"/></ShipAddress>

In this example we map the source Address element to the target ShipAddress element.

Compiler-Generated Scripts

We can see a C# script for each functoid, and for any custom C# script we add using the Scripting functoid. These scripts are found in a CDATA section at the end of the map.


<msxsl:script language="C#" implements-prefix="usercsharp">
public string DateCurrentDate()
{
    DateTime dt = DateTime.Now;
    return dt.ToString("yyyy-MM-dd");
}
</msxsl:script>

This is a very simple C# method that returns the current date.

Optional Links

If you have an element named FName in the source schema that is linked directly to an element called Name in the target schema, and the two elements are optional, you will find XSL map code like the following:

<xsl:if test="FName">
  <Name><xsl:value-of select="FName/text()"/></Name>
</xsl:if>

But if these two elements (FName, Name) are mandatory, you will find the following:

<Name><xsl:value-of select="FName/text()"/></Name>

Using C# code

You can see many calls to C# scripts in the XSL map. For example, you may find the following code setting the target schema's AddedDate element:

<xsl:variable name="var:v1" select="usercsharp:DateCurrentDate()"/>
<xsl:value-of select="$var:v1"/>

I think it is clear now that a BizTalk map is mainly simple XSL plus C# scripts.

I hope this helps.

June 21, 2009 Posted by | Biztalk | , | 3 Comments

BizTalk 2009 Orchestration Dehydration and Persistence Points

I am going to describe orchestration dehydration and persistence points: why and how BizTalk uses them. So let's go quickly.

We know that BizTalk sometimes depends on other external applications, so it is possible for BizTalk to wait on responses from those applications, which may consume a lot of resources and memory. Instead of just waiting, the orchestration will dehydrate: the state of the instance is taken out of memory and stored in SQL Server. BizTalk does this to free up valuable resources for other processes, and a special subscription is created for the dehydrated instance of the orchestration. This subscription monitors the BizTalk MessageBox for context properties that match your unique correlation ID. When the subscription comes across a matching message, the orchestration will "wake up": the state of the instance is taken out of the database and reconstituted in memory. The points at which the engine saves this state are known as persistence points.
Persistence Points
The orchestration engine persists the entire state of a running orchestration instance at various points, so that the instance can later be completely restored in memory.
The state includes
1. The internal state of the engine, including its current progress.
2. The state of any .NET components that maintain state information and are being used by the orchestration.
3. Message and variable values.
If a message is received but there is an unexpected shutdown before the state can be saved, the engine will not record that it has received the message and will receive it again upon restarting. The engine will save the state in the following circumstances:

1. The end of a transactional scope is reached.
The engine saves state at the end of a transactional scope so that the point at which the orchestration should resume is defined unambiguously, and so that compensation can be carried out correctly if necessary.
The orchestration will continue to run from the end of the scope if persistence was successful; otherwise, the appropriate exception handler will be invoked. If the scope is transactional and atomic, the engine will save state within that scope. If the scope is transactional and long-running, the engine will generate a new transaction and persist the complete state of the runtime.
2. A debugging breakpoint is reached.
3. A message is sent. The only exception is when a message is sent from within an atomic transaction scope.
4. The orchestration starts another orchestration asynchronously, as with the Start Orchestration shape.
5. The orchestration instance is suspended.
6. The system shuts down under controlled conditions. In that case, when the engine next runs, it will resume the orchestration instance from the last persistence point that occurred before the shutdown.
7. The engine determines that the instance should be dehydrated.
8. The orchestration instance is finished.

I hope this helps.

June 20, 2009 Posted by | Biztalk | , , | 1 Comment

Using JavaScript with RadAjaxPanel

Today I was working on using collapsible + and – operators to show and hide a RadAjaxPanel. I will share the business scenario, the problem I faced, and how it was solved.
Business Scenario
I have a user control containing a RadAjaxPanel control and two images, one for the + operator and the other for the – operator, and I use this user control in the default.aspx page. I am trying to write simple JavaScript to show the RadAjaxPanel when the + image is clicked and hide it when the – image is clicked.
Problems and solutions

1- Why was the JavaScript I added not working?
Because I had to set the RadAjaxPanel property named EnableOutsideScripts=true
and add the JavaScript code to my page.

2- How do I hide one of the two images when the page loads?
I did the normal coding and just set the image control property Visible = false,
but it did not work: I received a JavaScript error, "'image name' object not found". Why? Because a control with Visible = false is not rendered to the page at all, so the client-side script cannot find it.

3- How do I hide a server control from C# code without JavaScript errors?
To hide a server control using C# code without breaking any JavaScript, set the display style to none from code, like this: ControlName.Style["display"] = "none"; the control is still rendered, just not displayed.

4- Can I use style.display on the RadAjaxPanel?

No, you cannot use style.display on the RadAjaxPanel because it does not have a style object, so you need to put the RadAjaxPanel inside a container such as a div control and set the container's style.display.
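Putting the pieces together, the client-side wiring might look like this. The element IDs (panelContainer, imgPlus, imgMinus) are hypothetical; the RadAjaxPanel sits inside the panelContainer div, since the panel itself exposes no style object:

```javascript
// Decide the display values for the container and the two images.
function displayStates(show) {
  return {
    panel: show ? "" : "none",  // container div wrapping the RadAjaxPanel
    plus:  show ? "none" : "",  // hide "+" while the panel is visible
    minus: show ? "" : "none"   // hide "-" while the panel is hidden
  };
}

// Apply them to the page (called from the onclick of each image).
function togglePanel(show) {
  var s = displayStates(show);
  document.getElementById("panelContainer").style.display = s.panel;
  document.getElementById("imgPlus").style.display = s.plus;
  document.getElementById("imgMinus").style.display = s.minus;
}
```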

Now all functionality is working fine.
I hope this helps.

June 15, 2009 Posted by |, Developement | , , | 2 Comments

BizTalk 2009 Orchestration Debugging

HAT in BizTalk 2006 and 2009

In BizTalk 2006 we debugged orchestrations using the HAT (Health and Activity Tracking) tool, or by running BTSHatApp.exe from the Run window.
BizTalk 2009 moves this functionality from the HAT tool to the BizTalk Administration Console (MMC).

Debug Orchestration in BizTalk 2009

To debug an orchestration in BizTalk 2009, create a query for running or in-progress instances from the New Query tab of the BizTalk group.
You can then filter your query by instance status, number of records, specific instance names, etc.
If you find your orchestration in the results, right-click the orchestration name and select Orchestration Debugger, as shown in the image below.
Once the Orchestration Debugger screen has opened, you can set a breakpoint on any orchestration shape.
I hope this helps.

June 13, 2009 Posted by | Biztalk | , , | 3 Comments

Import orchestration Issue

Can you import one orchestration into two applications in BizTalk?
The answer is no: you can import the same orchestration into only one application, and you can only re-import an orchestration into that same application by checking the overwrite checkbox.
Okay, now you know the correct scenario for importing an orchestration. Let's look at an issue I faced.

Have you seen this exception while importing an orchestration into an application: 'assembly xxxx is already stored'?
If so, let's see why this exception is thrown and how to solve it.

This exception happens because you are trying to import an orchestration into one application while that orchestration already exists in another application.

You can solve this issue by following these steps:
1- Remove the old orchestration from the orchestration list in the old application.

2- Remove the assembly name from the resources list in the old application.

I hope this helps.

June 11, 2009 Posted by | Biztalk | , | Leave a comment

Select WCF Binding

While reading the book Addison Wesley's Essential Windows Communication Foundation, I came across a very useful and important topic about how to select a WCF binding, so I would like to share it.


There are actually nine preconfigured bindings in WCF. Each of them provides the means for a particular distributed computing need, and several factors determine which binding to choose for a specific application, including security, interoperability, reliability, performance, and transaction requirements. You can study these cases through the image below.
Each of the bindings supports a particular communication scenario, such as cross-machine, on-machine, and interoperable communication using Web services; I will explain each communication scenario in detail with a very simple example.


The netTcpBinding binding is designed to support communication between .NET applications that are deployed on separate machines across a network, including communication across intranets and the Internet. This type of communication is called cross-machine communication.
Address format: net.tcp://{hostname}[:port]/{service location}
Default port number: 808

netTcpBinding Service Configuration

<service name="EssentialWCF.Service1" behaviorConfiguration="EssentialWCF.Service1Behavior">
  <host>
    <baseAddresses>
      <add baseAddress="net.tcp://localhost/EssentialWCF"/>
    </baseAddresses>
  </host>
  <endpoint address="" binding="netTcpBinding" contract="EssentialWCF.IService1"/>
</service>

netTcpBinding Client Configuration

<endpoint address="net.tcp://localhost/Service1.svc" binding="netTcpBinding" contract="ServiceReference1.IService1"></endpoint>


WCF supports interprocess and intraprocess communication scenarios with the netNamedPipeBinding binding, which leverages a named pipes transport. This is a great binding for interprocess communication (IPC) because it provides a significant performance increase over the other standard bindings available in WCF.
This type of communication is called local machine communication.
Address format: net.pipe://{hostname}/{service location}
Default port number: none (named pipes do not use TCP ports)



netNamedPipeBinding Service Configuration
<service name="EssentialWCF.Service1" behaviorConfiguration="EssentialWCF.Service1Behavior">
  <host>
    <baseAddresses>
      <add baseAddress="net.pipe://localhost/EssentialWCF"/>
    </baseAddresses>
  </host>
  <endpoint address="" binding="netNamedPipeBinding" contract="EssentialWCF.IService1"/>
</service>
netNamedPipeBinding Client Configuration


<endpoint address="net.pipe://localhost/Service1.svc" binding="netNamedPipeBinding" contract="ServiceReference1.IService1"></endpoint>


The basicHttpBinding binding offers support for Web service communication based on the WS-I Basic Profile 1.1 (WS-BP 1.1) specification, which includes standards such as SOAP 1.1, WSDL 1.1, and Message Security 1.0. This type of communication is called Web service communication.

Address format: http://{hostname}/{service location} or https://{hostname}/{service location}
Default port number: 80 for http, 443 for https

basicHttpBinding Service Configuration
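Mirroring the netTcpBinding example above, a minimal service configuration sketch for basicHttpBinding (the EssentialWCF service, behavior, and contract names are assumptions carried over from that example):

```xml
<service name="EssentialWCF.Service1" behaviorConfiguration="EssentialWCF.Service1Behavior">
  <host>
    <baseAddresses>
      <add baseAddress="http://localhost/EssentialWCF"/>
    </baseAddresses>
  </host>
  <endpoint address="" binding="basicHttpBinding" contract="EssentialWCF.IService1"/>
</service>
```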

basicHttpBinding Client Configuration

<endpoint address="http://localhost/Service1.svc" binding="basicHttpBinding" contract="ServiceReference1.IService1"></endpoint>


The wsHttpBinding binding provides interoperable communication across heterogeneous platforms as well as advanced infrastructure-level protocols, such as security, reliable messaging, and transactions. The wsHttpBinding binding is the default binding in .NET Framework 3.0 whenever you need interoperable communication based on Web services. This type of communication is called advanced Web service communication.

Address format: http://{hostname}:{port}/{service location} or https://{hostname}:{port}/{service location}
Default port number: 80 for http, 443 for https

wsHttpBinding Service Configuration
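Mirroring the netTcpBinding example above, a minimal service configuration sketch for wsHttpBinding (the EssentialWCF service, behavior, and contract names are assumptions carried over from that example):

```xml
<service name="EssentialWCF.Service1" behaviorConfiguration="EssentialWCF.Service1Behavior">
  <host>
    <baseAddresses>
      <add baseAddress="http://localhost/EssentialWCF"/>
    </baseAddresses>
  </host>
  <endpoint address="" binding="wsHttpBinding" contract="EssentialWCF.IService1"/>
</service>
```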

wsHttpBinding Client Configuration

<endpoint address="http://localhost/Service1.svc" binding="wsHttpBinding" contract="ServiceReference1.IService1"></endpoint>


.NET Framework 3.5 introduces a new binding for Web service interoperability called the ws2007HttpBinding binding. This binding is similar to the wsHttpBinding binding except that it supports the latest WS-* standards available for messaging, security, reliable messaging, and transactions. This type of communication is called advanced Web service communication.

Address format: http://{hostname}:{port}/{service location} or https://{hostname}:{port}/{service location}
Default port number: 80 for http, 443 for https

ws2007HttpBinding Service Configuration
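Mirroring the netTcpBinding example above, a minimal service configuration sketch for ws2007HttpBinding (the EssentialWCF service, behavior, and contract names are assumptions carried over from that example):

```xml
<service name="EssentialWCF.Service1" behaviorConfiguration="EssentialWCF.Service1Behavior">
  <host>
    <baseAddresses>
      <add baseAddress="http://localhost/EssentialWCF"/>
    </baseAddresses>
  </host>
  <endpoint address="" binding="ws2007HttpBinding" contract="EssentialWCF.IService1"/>
</service>
```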

ws2007HttpBinding Client Configuration

<endpoint address="http://localhost/Service1.svc" binding="ws2007HttpBinding" contract="ServiceReference1.IService1"></endpoint>


The wsDualHttpBinding binding is similar to the wsHttpBinding binding, with additional support for duplex communication but without support for transport-level security. This type of communication is called advanced Web service communication.

Address format: http://{hostname}:{port}/{service location}
Default port number: 80 for http

wsDualHttpBinding Service Configuration

<service name="EssentialWCF.Service1" behaviorConfiguration="EssentialWCF.Service1Behavior">
  <host>
    <baseAddresses>
      <add baseAddress="http://localhost/EssentialWCF"/>
    </baseAddresses>
  </host>
  <endpoint address="" binding="wsDualHttpBinding" contract="EssentialWCF.IService1"/>
</service>

wsDualHttpBinding Client Configuration

<endpoint address="http://localhost/Service1.svc" binding="wsDualHttpBinding" contract="ServiceReference1.IService1"></endpoint>

Now I think we can decide which WCF binding type to use in our solutions.

I hope this helps.

June 9, 2009 Posted by | WCF | , , , , , , | 1 Comment

BizTalk Custom Excel Pipeline Component

Have you ever tried to process an Excel file using BizTalk? I was trying to convert an Excel file to an XML file using BizTalk. There are a lot of ways to do this, but I think the best way is with a custom pipeline component.
Create Custom Excel Pipeline
Here I will explain how to create a custom pipeline component from A to Z.
Your first task is to add a reference to Microsoft.BizTalk.Pipeline.dll. After that you must consider the implementation of the custom pipeline component. There are three logical areas of a custom pipeline component to consider:
• Attributes and class declaration
• Design-time properties
• Implementation of the four pipeline interfaces: IBaseComponent, IComponentUI,
IPersistPropertyBag, and IComponent

I will describe each one now.
Attributes and Class Declaration
Here is the header section from the sample code in Listing 1-1

[ComponentCategory(CategoryTypes.CATID_PipelineComponent)]
[ComponentCategory(CategoryTypes.CATID_Decoder)]
public class DecodeExcelC : IBaseComponent, IComponentUI, IPersistPropertyBag, IComponent

In the previous code we declare that the component category is a pipeline component and a decode component, and then we inherit the pipeline interfaces; I will describe each interface later in this post.
Design-Time Properties
Custom component design-time properties are exposed via public declarations and appropriate get/set methods. The following section of Listing 1-2 demonstrates how the design-time properties are exposed.

private string connectionString = null;
[System.ComponentModel.Description("Excel Connection String")]
public string ConnectionString
{
    get { return connectionString; }
    set { connectionString = value; }
}

private string filter = null;
[System.ComponentModel.Description("Filter for Select Statement")]
public string Filter
{
    get { return filter; }
    set { filter = value; }
}

private string sqlStatement = null;
[System.ComponentModel.Description("Select Statement to Read ODBC Files.")]
public string SqlStatement
{
    get { return sqlStatement; }
    set { sqlStatement = value; }
}

private string tempDropFolderLocation = null;
[System.ComponentModel.Description("Temp Folder for Dropping ODBC Files.")]
public string TempDropFolderLocation
{
    get { return tempDropFolderLocation; }
    set { tempDropFolderLocation = value; }
}

private bool deleteTempMessages;
[System.ComponentModel.Description("Delete Temp Messages after processing")]
public bool DeleteTempMessages
{
    get { return deleteTempMessages; }
    set { deleteTempMessages = value; }
}

private string fnamespace = null;
[System.ComponentModel.Description("NameSpace for resultant XML Message, for example:")]
public string NameSpace
{
    get { return fnamespace; }
    set { fnamespace = value; }
}

private string rootNode = null;
[System.ComponentModel.Description("Root Node Name for resultant XML Message")]
public string RootNodeName
{
    get { return rootNode; }
    set { rootNode = value; }
}

private string dataNode = null;
[System.ComponentModel.Description("Data Node Name for resultant XML Message rows")]
public string DataNodeName
{
    get { return dataNode; }
    set { dataNode = value; }
}

Implementing the Pipeline Interfaces
I will explain how each interface is used in my custom pipeline component, as follows.
IBaseComponent contains three read-only properties that return the description, version, and name of the component to the design-time environment and other tools interested in basic component information. Implementing IBaseComponent is straightforward and requires implementing only the three read-only properties. Here is the relevant section of the code, in Listing 1-3.

string IBaseComponent.Description
{
    get { return "BizTalk Receive Pipeline Component for Decoding Excel Files"; }
}

string IBaseComponent.Name
{
    get { return "Excel File decoder"; }
}

string IBaseComponent.Version
{
    get { return "1.0"; }
}

IComponentUI serves to present the component icon in the design-time tool set. The two members implemented for IComponentUI are Icon and Validate. The Icon property provides a pointer to the graphic icon displayed in the design-time user interface. If no icon is specified, Visual Studio will display the default icon in the BizTalk Pipeline Components section of the Toolbox. The Validate method allows processing of any design-time properties. The following portion of Listing 1-4 shows both the Validate and Icon members.

IntPtr IComponentUI.Icon
{
    get
    {
        ResourceManager rm = new ResourceManager("ODBCPipelineComponent.Resource", Assembly.GetExecutingAssembly());
        Bitmap bm = (Bitmap)rm.GetObject("odbc");
        return bm.GetHicon();
    }
}

System.Collections.IEnumerator IComponentUI.Validate(object projectSystem)
{
    return null;
}

The purpose of the IPersistPropertyBag interface is to provide unmanaged code with access to your object. If you have worked with COM, you may have used property bags in other projects. IPersistPropertyBag also allows access to design-time configuration values. There are four public methods in the IPersistPropertyBag interface: GetClassID, InitNew, Load, and Save.

The GetClassID function must return a unique ID that represents the component. The InitNew function can be used to establish structures used by the other IPersistPropertyBag methods. The final two functions facilitate the loading and saving of property values. The following portion of the code from Listing 1-5 demonstrates the implementation of the four IPersistPropertyBag methods.

void IPersistPropertyBag.GetClassID(out Guid classID)
{
    classID = new Guid("71A3FBC6-F5D6-4fd6-A17D-1664A58C7E68");
}

void IPersistPropertyBag.InitNew()
{
}

void IPersistPropertyBag.Load(IPropertyBag propertyBag, int errorLog)
{
    object valConnectionString = null,
           valtempDropFolderLocation = null,
           valSqlStatement = null,
           valDeleteTempMessages = null,
           valRootNodeName = null,
           valNameSpace = null,
           valDataNodeName = null,
           valFilter = null;

    try
    {
        propertyBag.Read("ConnectionString", out valConnectionString, 0);
        propertyBag.Read("TempDropFolderLocation", out valtempDropFolderLocation, 0);
        propertyBag.Read("SqlStatement", out valSqlStatement, 0);
        propertyBag.Read("DeleteTempMessages", out valDeleteTempMessages, 0);
        propertyBag.Read("RootNodeName", out valRootNodeName, 0);
        propertyBag.Read("NameSpace", out valNameSpace, 0);
        propertyBag.Read("DataNodeName", out valDataNodeName, 0);
        propertyBag.Read("Filter", out valFilter, 0);
    }
    catch (ArgumentException argEx)
    {
        // A property is missing from the bag; fall through and use defaults.
        // throw argEx;
    }
    catch (Exception ex)
    {
        throw new ApplicationException("Error reading propertybag: " + ex.Message);
    }

    if (valFilter != null)
        Filter = (string)valFilter;
    else
        Filter = "";

    if (valConnectionString != null)
        ConnectionString = (string)valConnectionString;
    else
        ConnectionString = "";

    if (valtempDropFolderLocation != null)
        TempDropFolderLocation = (string)valtempDropFolderLocation;
    else
        TempDropFolderLocation = "";

    if (valSqlStatement != null)
        SqlStatement = (string)valSqlStatement;
    else
        SqlStatement = "";

    if (valDeleteTempMessages != null)
        DeleteTempMessages = (bool)valDeleteTempMessages;
    else
        DeleteTempMessages = true;

    if (valRootNodeName != null)
        RootNodeName = (string)valRootNodeName;
    else
        RootNodeName = "";

    if (valNameSpace != null)
        NameSpace = (string)valNameSpace;
               NameSpace = "";

           if (valDataNodeName != null)
               DataNodeName = (string)valDataNodeName;
               DataNodeName = "";

void IPersistPropertyBag.Save(IPropertyBag propertyBag, bool clearDirty, bool saveAllProperties)
{
    object valConnectionString = (object)ConnectionString;
    propertyBag.Write("ConnectionString", ref valConnectionString);

    object valtempDropFolderLocation = (object)TempDropFolderLocation;
    propertyBag.Write("TempDropFolderLocation", ref valtempDropFolderLocation);

    object valSqlStatement = (object)SqlStatement;
    propertyBag.Write("SqlStatement", ref valSqlStatement);

    object valDeleteTempMessages = (object)DeleteTempMessages;
    propertyBag.Write("DeleteTempMessages", ref valDeleteTempMessages);

    object valRootNodeName = (object)RootNodeName;
    propertyBag.Write("RootNodeName", ref valRootNodeName);

    object valNameSpace = (object)NameSpace;
    propertyBag.Write("NameSpace", ref valNameSpace);

    object valDataNodeName = (object)DataNodeName;
    propertyBag.Write("DataNodeName", ref valDataNodeName);

    object valFilter = (object)Filter;
    propertyBag.Write("Filter", ref valFilter);
}

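The Load and Save methods above round-trip the component's design-time properties. For context, here is a hedged sketch of how the hosting component class and those properties might be declared; the class name and category attributes are illustrative assumptions, not taken from the downloadable project (only the GUID matches the one returned by GetClassID):

```csharp
using System;
using System.Runtime.InteropServices;
using Microsoft.BizTalk.Component.Interop;

// Illustrative sketch only: class name and category attributes are assumptions.
[ComponentCategory(CategoryTypes.CATID_PipelineComponent)]
[ComponentCategory(CategoryTypes.CATID_Decoder)]
[Guid("71A3FBC6-F5D6-4fd6-A17D-1664A58C7E68")]
public class ODBCPipelineComponent :
    IBaseComponent, IComponentUI, IPersistPropertyBag,
    Microsoft.BizTalk.Component.Interop.IComponent
{
    // Design-time properties read and written by Load and Save.
    public string ConnectionString { get; set; }
    public string TempDropFolderLocation { get; set; }
    public string SqlStatement { get; set; }
    public bool DeleteTempMessages { get; set; }
    public string RootNodeName { get; set; }
    public string NameSpace { get; set; }
    public string DataNodeName { get; set; }
    public string Filter { get; set; }

    // IBaseComponent, IComponentUI, IComponent, and IPersistPropertyBag
    // members are implemented as shown in the listings in this post.
}
```

The category attributes are what make the assembly show up as a pipeline component in the BizTalk toolbox once it is placed in the Pipeline Components folder.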

IComponent is the most important interface in the component, as it contains the processing logic for messages. This interface defines a single method, Execute, which takes two parameters. BizTalk calls the Execute method to process the message, passing the message itself and the message context as the two parameters. Listing 1-6 shows the Execute method and its two parameters:

   1: IBaseMessage Microsoft.BizTalk.Component.Interop.IComponent.Execute(IPipelineContext pContext, IBaseMessage pInMsg)
   2:         {
   3:             System.Diagnostics.Debug.WriteLine("At top of Execute method for DBASE pipeline");
   4:             IBaseMessagePart bodyPart = pInMsg.BodyPart;
   5:             if (bodyPart != null)
   6:             {
   7:                 try
   8:                 {
   9:                     // First write the ODBC file to disk so can query it.
  10:                     BinaryReader binaryReader = new BinaryReader(bodyPart.Data);
  11:                     string folderName = this.TempDropFolderLocation;
  12:                     if (folderName.Substring(folderName.Length - 1, 1) != "\\")
  13:                         folderName += "\\";
  14:                     string extension = ".xls";
  15:                     string filename = System.IO.Path.GetRandomFileName();
  16:                     filename = filename.Remove(8);
  17:                     filename += extension;
  18:                     string folderNameAndFileName = folderName + filename;
  19:                     FileStream fileStream = new FileStream(folderNameAndFileName, FileMode.CreateNew);
  20:                     BinaryWriter binaryWriter = new BinaryWriter(fileStream);
  21:                     binaryWriter.Write(binaryReader.ReadBytes(Convert.ToInt32(binaryReader.BaseStream.Length)));
  22:                     binaryWriter.Close();
  23:                     binaryReader.Close();
  24:                     // Create the Connection String for the ODBC File
  25:                     string dataSource;
  26:                     dataSource = "Data Source=" + folderNameAndFileName + ";";                    
  27:                     string odbcConnectionString = this.connectionString;
  28:                     if (odbcConnectionString.Substring(odbcConnectionString.Length - 1, 1) != ";")
  29:                         odbcConnectionString += ";";
  30:                     odbcConnectionString += dataSource;
  31:                     OleDbConnection oConn = new OleDbConnection();
  32:                     oConn.ConnectionString = odbcConnectionString;
  33:                     // Create the Select Statement for the ODBC File
  34:                     OleDbDataAdapter oCmd;
  35:                     // Get the filter if there is one
  36:                     string whereClause = "";
  37:                     if (Filter.Trim() != "")
  38:                         whereClause = " Where " + Filter.Trim();
  39:                     oCmd = new OleDbDataAdapter(this.SqlStatement.Trim() + whereClause, oConn);
  40:                     oConn.Open();
  41:                     // Run the Select statement above and load the results into a DataSet.
  42:                     DataSet odbcDataSet = new DataSet();
  43:                     oCmd.Fill(odbcDataSet, this.DataNodeName);
  44:                     oConn.Close();
  45:                     // Delete the temporary file if configured to do so
  46:                     if (this.DeleteTempMessages)
  47:                         System.IO.File.Delete(folderNameAndFileName);
  48:                     // Write the XML From this DataSet into a String Builder
  49:                     System.Text.StringBuilder stringBuilder = new StringBuilder();
  50:                     System.IO.StringWriter stringWriter = new System.IO.StringWriter(stringBuilder);
  51:                     odbcDataSet.Tables[0].WriteXml(stringWriter);
  52:                     System.Xml.XmlDocument fromDataSetXMLDom = new System.Xml.XmlDocument();
  53:                     fromDataSetXMLDom.LoadXml(stringBuilder.ToString());
  54:                     // Create the Final XML Document. Root Node Name and Target Namespace
  55:                     // come from properties set on the pipeline
  56:                     System.Xml.XmlDocument finalMsgXmlDom = new System.Xml.XmlDocument();
  57:                     System.Xml.XmlElement xmlElement;
  58:                     xmlElement = finalMsgXmlDom.CreateElement("ns0", this.RootNodeName, this.NameSpace);
  59:                     finalMsgXmlDom.AppendChild(xmlElement);
  60:                     // Add the XML to the finalMsgXmlDom from the DataSet XML, 
  61:                     // After this the XML Message will be complete
  62:                     finalMsgXmlDom.FirstChild.InnerXml = fromDataSetXMLDom.FirstChild.InnerXml;
  63:                     Stream strm = new MemoryStream();
  64:                     // Save final XML Document to Stream
  65:                     finalMsgXmlDom.Save(strm);
  66:                     strm.Position = 0;
  67:                     bodyPart.Data = strm;
  68:                     pContext.ResourceTracker.AddResource(strm);
  69:                 }
  70:                 catch (System.Exception)
  71:                 {
  72:                     throw;
  73:                 }
  74:             }
  75:             return pInMsg;
  76:         }

I will explain this method now, because it contains all the logic of my pipeline component. In line 4, I get the message body from the IBaseMessage object and store it in an IBaseMessagePart object.

From line 10 to 23, I write the incoming message stream out to a temporary file on disk so that it can be queried easily.

From line 25 to 44, I build the connection string for the Excel file and then read its data into a DataSet through the OLE DB provider.

From line 46 to 75, I delete the temporary file (when DeleteTempMessages is set), generate the new XML message from the DataSet, and return it as the message body.
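To make the result concrete, here is a hedged sketch of what the final message could look like, assuming the pipeline properties are set to RootNodeName = "Orders", NameSpace = "http://tempuri.org/odbc", and DataNodeName = "Row", and the spreadsheet has two columns (all of these names are example values, not from the original project); the actual element names come from your property values and the column names in the DataSet:

```xml
<ns0:Orders xmlns:ns0="http://tempuri.org/odbc">
  <Row>
    <OrderID>1001</OrderID>
    <Amount>250.00</Amount>
  </Row>
  <Row>
    <OrderID>1002</OrderID>
    <Amount>99.50</Amount>
  </Row>
</ns0:Orders>
```

Note that only the root element carries the namespace, because the inner XML is copied verbatim from the DataSet's output in line 62.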

Now that we have finished the custom pipeline component, you can download the full project by clicking Here.
I hope this helps.

June 7, 2009 Posted by | Biztalk | , , , | 19 Comments