Monday, May 4, 2009

Layers & Pipes and Filters pattern

The Architectural Patterns

The Layers pattern helps to structure applications that can be decomposed into groups of subtasks in which each group of subtasks is at a particular level of abstraction.

The Pipes and Filters pattern provides a structure for systems that process a stream of data. Each processing step is encapsulated in a filter component. Data is passed through pipes between adjacent filters. Recombining filters allows you to build families of related systems.

The Blackboard pattern is useful for problems for which no deterministic solution strategies are known. In Blackboard several specialized subsystems assemble their knowledge to build a possibly partial or approximate solution.

The Broker pattern can be used to structure distributed software systems with decoupled components that interact by remote service invocations. A broker component is responsible for coordinating communication, such as forwarding requests, as well as for transmitting results and exceptions.

The Model-View-Controller pattern (MVC) divides an interactive application into three components. The model contains the core functionality and data. Views display information to the user. Controllers handle user input. Views and controllers together comprise the user interface. A change-propagation mechanism ensures consistency between the user interface and the model.

The Presentation-Abstraction-Control pattern (PAC) defines a structure for interactive software systems in the form of a hierarchy of cooperating agents. Every agent is responsible for a specific aspect of the application's functionality and consists of three components: presentation, abstraction, and control. This subdivision separates the human-computer interaction aspects of the agent from its functional core and its communication with other agents.

The Microkernel pattern applies to software systems that must be able to adapt to changing system requirements. It separates a minimal functional core from extended functionality and customer-specific parts. The microkernel also serves as a socket for plugging in these extensions and coordinating their collaboration.

The Reflection pattern provides a mechanism for changing structure and behavior of software systems dynamically. It supports the modification of fundamental aspects, such as type structures and function call mechanisms. In this pattern, an application is split into two parts. A meta level provides information about selected system properties and makes the software self-aware. A base level includes the application logic. Its implementation builds on the meta level. Changes to information kept in the meta level affect subsequent base-level behavior.


The Design Patterns

The Whole-Part pattern helps with the aggregation of components that together form a semantic unit. An aggregate component, the Whole, encapsulates its constituent components, the Parts, organizes their collaboration, and provides a common interface to its functionality. Direct access to the Parts is not possible.

The Master-Slave pattern supports fault tolerance, parallel computation and computational accuracy. A master component distributes work to identical slave components and computes a final result from the results these slaves return.

The Proxy pattern makes the clients of a component communicate with a representative rather than with the component itself. Introducing such a placeholder can serve many purposes, including enhanced efficiency, easier access, and protection from unauthorized access.

The Command Processor pattern separates the request for a service from its execution. A command processor component manages requests as separate objects, schedules their execution, and provides additional services such as the storing of request objects for later undo.

The View Handler pattern helps to manage all views that a software system provides. A view handler component allows clients to open, manipulate and dispose of views. It also coordinates dependencies between views and organizes their update.

The Forwarder-Receiver pattern provides transparent inter-process communication for software systems with a peer-to-peer interaction model. It introduces forwarders and receivers to decouple peers from the underlying communication mechanisms.

The Client-Dispatcher-Server pattern introduces an intermediate layer between clients and servers, the dispatcher component. It provides location transparency by means of a name service, and hides the details of the establishment of the communication connection between clients and servers.

The Publisher-Subscriber pattern helps to keep the state of cooperating components synchronized. To achieve this it enables one-way propagation of changes: one publisher notifies any number of subscribers about changes to its state.

Design Pattern: Layers

Layers is an architectural design pattern that structures applications so they can be decomposed into groups of subtasks such that each group of subtasks is at a particular level of abstraction.

Some Examples

The traditional 3-tier client-server model, which separates application functionality into three distinct abstractions, is an example of layered design. Much has been written about the 3-tier client-server model, and I won't discuss it further other than to say that it is the result of layered design thinking.

Figure 1: A simplified view of the 3-tier client-server architecture.

In a more general sense, the OSI 7-layer networking model and the Internet Protocol Stack, both illustrated in Figure 2, are networking protocols that illustrate the use of layering in network architecture.


Figure 2: The OSI 7-layer model, largely supplanted by the more recent and popular Internet Protocol Stack.

Here is a brief table describing the layers of the OSI 7-layer model.

Application: Provides services to the user. Examples include Telnet and HTTP.

Presentation: Structures information and attaches semantics.

Session: Provides dialog control and synchronization facilities.

Transport: Segments long messages into packets and guarantees delivery, recovering lost packets with acknowledgments and retransmissions. Also provides flow control and congestion control.

Network: Routes packets from the source host to the destination host; selects a route from sender to receiver.

Data Link: Moves packets from one node (host or packet switch) to the next node. Handles error detection and correction, and medium access.

Physical: The lowest layer in the OSI stack; moves information between two systems connected by a single physical link. It provides the abstraction of bit transport, independent of the link technology, and specifies voltage levels and bit spacings.

The OSI 7-layer model is a cool example because it neatly shows the general types of services required for computers to talk to each other. The Internet protocol stack is a refinement of the OSI 7-layer model, minus the Presentation and Session layers, whose services are either not needed or abstracted into the neighboring layers.

Note that the layers in the OSI stack don't necessarily run on distinct hardware or in distinct memory spaces. For example, it's common to find the Data Link and Physical layers tightly coupled and interleaved (for performance reasons) within the same Ethernet network interface card.
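To make the idea of protocol layering concrete, here is a toy Python sketch (not taken from any real stack; the header contents are invented): each layer wraps the payload handed down from the layer above with its own header, so no layer needs to know what the layers above it put inside.

# Toy sketch of protocol layering. Header contents are invented; the point
# is that each layer only wraps the payload produced by the layer above it.

def application(data):
    return "HTTP|" + data

def transport(message):
    return "TCP seq=1|" + message

def network(segment):
    return "IP dst=192.0.2.7|" + segment

def link(packet):
    return "ETH mac=aa:bb|" + packet

frame = link(network(transport(application("GET /index.html"))))
print(frame)   # each layer's header wraps the payload of the layer above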

Context

A large system requires decomposition. One way to decompose a system is to segment it into collaborating objects. In large systems a first-cut rough model might produce hundreds or thousands of potential objects. Additional refactoring typically leads to object groupings that provide related types of services. When these groups are properly segmented, and their interfaces consolidated, the result is a layered architecture.

Benefits

· Segmentation of high-level from low-level issues. Complex problems can be broken into smaller, more manageable pieces.

· Since the specification of a layer says nothing about its implementation, the implementation details of a layer are hidden (abstracted) from other layers.

· Many upper layers can share the services of a lower layer. Thus layering allows us to reuse functionality.

· Development by teams is aided because of the logical segmentation.

· Easier exchange of parts at a later date.

Downsides

The trouble with layers of computer software is that sooner or later you lose touch with reality. Layers are abstraction boundaries, and the more they encapsulate their workings, the less aware one is of the application's inner workings.

Layering is a form of information hiding. A layering violation occurs when a layer uses knowledge of the implementation details of another layer in its own operations. At the limit this leads to changes to one layer forcing changes to every other layer, which is an expensive and error-prone proposition.

Layering can lead to poor performance. To avoid this penalty, in situations where an upper layer can optimize its actions by knowing what a lower layer is doing, we can reveal information that would normally be hidden behind a layer boundary.

The layers must be engineered at the outset, before the system is built.

Forces

The following is a partial list of forces that bring about layered architectures. Note that some of these forces are present to varying degrees in all software systems.

Late source code changes should not ripple through the system.

Interfaces should be stable.

Parts should be exchangeable.

It should be possible to build other systems at a later date that share the same low-level issues as the system currently being designed.

Similar responsibilities should be grouped to help understandability and maintainability.

The system will be built by a team of programmers, and work has to be subdivided along clear boundaries.

A layered model does not imply that each layer should be in a separate address space. Efficient implementations demand that layer crossings be fast and cheap. For example, a user interface may need efficient access to field validations.

Structure


Class: Layer J

Responsibility: Provides services used by Layer J+1 and delegates subtasks to Layer J-1

Collaborator: Layer J-1

Dynamics

Here are some typical interactions in layered architectures.

Delegation of Requests

Messages that percolate downwards between layers are called Requests.  For example, a client issues a request to Layer J. What Layer J cannot fulfill, it delegates to Layer J-1. Note that Layer J often translates requests from Layer J+1 into several requests to Layer J-1.
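As a minimal illustration (the class names and methods below are invented and not part of the pattern text), here is how one request to Layer J might turn into several requests to Layer J-1:

class DataAccessLayer:
    # "Layer J-1": knows nothing about the layers above it.
    def read(self, key):
        return {"key": key, "value": "stored value for " + key}

    def write(self, key, value):
        print("persisted", key, "=", value)

class BusinessLayer:
    # "Layer J": knows only the layer directly below.
    def __init__(self, lower):
        self.lower = lower

    def rename(self, key, new_value):
        # One request from Layer J+1 becomes several requests to Layer J-1.
        current = self.lower.read(key)
        self.lower.write(key, new_value)
        return {"old": current["value"], "new": new_value}

# The client (Layer J+1) talks only to Layer J.
service = BusinessLayer(DataAccessLayer())
print(service.rename("customer-42", "Acme Ltd."))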

Notifications

Messages that percolate upward between layers are called Notifications. A notification could start at layer J where, for example, an observer object detects an observable event. Layer J then formulates and sends a message (notification) to Layer J+1.

Caching Layers

Layers are logical places to keep information caches. The results of requests that normally travel down through several layers can be cached to improve performance.
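A hedged sketch of the idea in Python (the names and the simulated delay are illustrative only): a caching layer answers repeated requests itself and only forwards cache misses to the slower layer below.

import time

class SlowLowerLayer:
    def lookup(self, key):
        time.sleep(0.1)              # stands in for an expensive lower-layer call
        return key.upper()

class CachingLayer:
    def __init__(self, lower):
        self.lower = lower
        self._cache = {}

    def lookup(self, key):
        if key not in self._cache:   # only a cache miss travels down a layer
            self._cache[key] = self.lower.lookup(key)
        return self._cache[key]

layer = CachingLayer(SlowLowerLayer())
print(layer.lookup("abc"))           # slow: goes to the lower layer
print(layer.lookup("abc"))           # fast: served from the cache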

Intra- and inter-Application Communications

A system's programming interface is often implemented as a layer. Thus, if two applications (or inter-application elements) need to communicate, placing the interface responsibilities into dedicated layers can greatly simplify the other application layers and, as a bonus, make them more easily reusable.

Some Layers to Consider

The GUI Layer

The principle of separating the user interface from the application proper is old, yet for all the talk we devote to it, it is rarely practiced. It is the principle with the hardest consequences and the hardest to follow consistently. Most applications barely separate the GUI from the application code, though they claim otherwise.

Imagine the application is totally program-driven, with the user interface just one driving program. I'm not talking simply of separating the interface code from the application code; I mean separate GUI and application components.

In other words: The GUI is not a part of the application. It is the first client of the application.

A typical VFP application has too much special-purpose code in the GUI. This is mostly a fundamental problem with the IDE: The easiest thing to do, by far, is to put code in those GUI control methods.

Why so much GUI code? One cause of bloated GUI code is that it does things that should be done by the model, which we'll define, for now, as simply another layer somewhere. Another cause is that many people tend to embed code to maintain various kinds of integrity among objects. The result is know-it-all controls.

Another mistake many developers fall into is having GUI elements pull the data they display directly from the domain model and then update the domain objects whenever changes are made. Again, nothing could be easier in VFP!

Possibly better is a system of naive controls that rely on separate Renderer objects to fill them with data from the domain model objects, and to update the domain objects with the changes made by the user.
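Here is a minimal sketch of that arrangement in Python rather than VFP (all class and field names are invented): the control only holds a value, the domain object knows nothing about the GUI, and a Renderer shuttles data between them.

class Textbox:
    # A "naive" control: it only holds a value and knows nothing of the domain.
    def __init__(self):
        self.value = ""

class Customer:
    # A domain object: it knows nothing about the GUI.
    def __init__(self, name):
        self.name = name

class CustomerNameRenderer:
    # Fills the control from the domain object and writes edits back.
    def __init__(self, control, customer):
        self.control = control
        self.customer = customer

    def show(self):
        self.control.value = self.customer.name

    def commit(self):
        self.customer.name = self.control.value

box = Textbox()
customer = Customer("Acme Ltd.")
renderer = CustomerNameRenderer(box, customer)
renderer.show()
box.value = "Acme Limited"   # the user edits the control
renderer.commit()
print(customer.name)         # "Acme Limited"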

It is easier on the user if input errors are flagged directly upon entry, and having the UI outside the application separates input from error detection. The UI is also the first thing that will change, so isolate it and make an interface to it. Then someone will remove the human from the picture entirely, with electronic interchange or another application driving the program. Therefore, just making an interface to the UI component is not sufficient; it has to be an interface that does not care about the UI.

Therefore, put the UI totally outside the application. The application proper is bounded by a program-driven interface, and the UI is just one user of that interface, perhaps not even the first. Other users of the application could be another controlling application, or a testing application. Once the GUI is separate, almost anything is possible.

Downsides

If the UI is really outside the application, what about when the user starts and then cancels a modification - where are the editing and rollback copies of the object kept? In the UI, or outside the application? What about typing errors - are they detected in the GUI or inside the application?

Answer: Keep edits in the GUI, of course!  The user fumbling around in a GUI is a reality of a GUI. You want idiomatic GUI behavior and effects encapsulated in the GUI. Imagine writing a second program-driven interface, and having to deal with all these messages reminding the automated program in real-time that, say, a field is required. Yech!

Isolation Layers

Building systems requires us to bring many varied and unrelated concepts together. A typical medium- or large-sized system might involve diverse concepts like domain functionality, transactions, meta-data, database technology, network communication protocols, OS API calls, a GUI, etc.

Given pressure to quickly produce a reasonably fast system, it's tempting to tie these concepts closely together and so embed transaction-control code in the GUI code, or OS API code in the business code. This leads to systems that are:

Hard to change: if you want to change the transaction-control system, you need to scour all the GUI code to find everything related to transaction control.

Hard to understand: business code and OS API code are, in their own right, complex and hard to understand. Mix them together and the complexity multiplies before your very eyes.

Hard to write: if you're writing business code, the last thing you want to be worrying about is catching OS exceptions.

Therefore...

Write a layer of software to isolate each disparate concept or technology. These layers should isolate at the conceptual level (perhaps business code really needs to know nothing about the OS API - this is all handled transparently by some object management code) and/or at the technical level (handling unhandled exceptions raised by the object management code so they don't find their way into the business code). Isolation should be two-way (the business code 'knows' nothing of the OS API code and vice-versa).
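As a hedged sketch of such a layer (the class, method, and exception names are invented): business code calls a small storage interface and catches one neutral exception type, while the OS-level calls and OS exceptions stop inside the isolation layer.

class StorageError(Exception):
    # The only error type the business code ever sees.
    pass

class FileStore:
    # Isolation layer over OS-level file calls; OS details stop here.
    def load(self, path):
        try:
            with open(path, encoding="utf-8") as handle:
                return handle.read()
        except OSError as exc:
            raise StorageError("could not load " + path) from exc

    def save(self, path, text):
        try:
            with open(path, "w", encoding="utf-8") as handle:
                handle.write(text)
        except OSError as exc:
            raise StorageError("could not save " + path) from exc

store = FileStore()
store.save("note.txt", "hello")
print(store.load("note.txt"))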

This approach leads to systems that are:

Easier to change: by isolating the database from the communication code we can change one or the other with minimum impact

Easier to understand: each 'bit' of the system deals with only one concept: business, networks, database

Easier to write: business people can write business code that isn't polluted with code to display dialogue boxes or handle network exceptions.

Of course, the Isolation Layer itself may be complex and possibly represents a single point of failure. Over-application of the pattern leads to a system where everything is strongly decoupled, so the effects of system events may be unpredictable and design or change is always 'selfish': distribution is hidden from the business designer, and so they design without any thought for distribution - something which could bring the system to its knees.

Discussion: The choice of a layered architecture can have many beneficial effects on your application if it is applied in the proper way. First, since the architecture is so simple, it is easy to explain to team members and so demonstrate where each object's role fits into the "big picture". If a designer is very strict about clearly defining where objects fit within the layers, and the interfaces between the layers, then the potential for reuse of many objects in the system can be greatly increased.

A common problem with many object designs is that they are too tightly constrained to the limits of the particular application being built. Many designers tend to put too much of the logic of an application in the GUI layer. In this case, there are few, if any, domain objects that are potentially available for reuse in other applications.

Another benefit of this layering is that it makes it easy to divide work along layer boundaries. It is easy to assign different teams or individuals to the work of coding the layers in a four-layer architecture, since the interfaces are identified and understood well in advance of coding. Finally, a four-layer architecture makes it possible to code the bulk of your system (in the domain model and application model layers) to be independent of the choice of persistence mechanism and windowing system.

Layers can make for great abstractions. But remember: abstractions are illusions! Beautiful, clean, elegant abstractions are still illusions. Don't be afraid to "cheat", for example when you need better performance, and poke through a layer's formal boundaries. This is the essence of programming. Once you have a layered architecture don't be afraid to hack, just be elegant and rigorous about how you surface your hacks to your consumers.

Pipes and Filters

You have an integration solution that consists of several financial applications. The applications use a wide range of formats—such as the Interactive Financial Exchange (IFX) format, the Open Financial Exchange (OFX) format, and the Electronic Data Interchange (EDI) format—for the messages that correspond to payment, withdrawal, deposit, and funds transfer transactions.

Integrating these applications requires processing the messages in different ways. For example, converting an XML-like message into another XML-like message involves an XSLT transformation. Converting an EDI data message into an XML-like message involves a transformation engine and transformation rules. Verifying the identity of the sender involves verifying the digital signature attached to the message. In effect, the integration solution applies several transformations to the messages that are exchanged by its participants.

The Pipes and Filters architectural pattern divides the task of a system into several sequential processing steps. These steps are connected by the data flow through the system--the output data of a step is the input to the subsequent step. Each processing step is implemented by a filter component. A filter consumes and delivers data incrementally--in contrast to consuming all its input before producing any output--to achieve low latency and enable real parallel processing. The input to the system is provided by a data source such as a text file. The output flows into a data sink such as a file, terminal, animation program and so on. The data source, the filters and the data sink are connected sequentially by pipes. Each pipe implements the data flow between adjacent processing steps. The sequence of filters combined by pipes is called a processing pipeline. For a more detailed Motivation description for this pattern see: Buschmann, F., R. Meunier, H. Rohnert, P. Sommerlad, and M. Stal. Pattern-Oriented Software Architecture: A System Of Patterns. West Sussex, England: John Wiley & Sons Ltd., 1996.

Known Uses

This pattern has been used in the following systems:

UNIX made the Pipes and Filters pattern popular with its command shells and filter programs. (Bach, M.J., The Design of the UNIX Operating System, Prentice Hall, 1986.)

CMS Pipelines extends the operating systems of IBM mainframes to support Pipes and Filters architectures. (Hartmann, J., C. Reichetzeder, and M. Varian, CMS Pipelines, http://www.akhwien.ac.at/pipeline.html.)

LASSPTools is a toolset for numerical analysis and graphics. It contains filter programs that may be combined using UNIX pipes. (Sethna, J., LASSPTools: Graphical and Numerical Extensions to Unix, http://www.lassp.cornell.edu/LASSPTools/LASSPTools.html.)


Keywords

Layers pattern, Pipes and Filters pattern, Buschmann patterns, architectural patterns, data stream, filter, parallel processing, families of related systems

Business Domains

graphics, numerical analysis, operating systems, processing data streams

Problem Forces

· Different sources of input data exist (for example, a network connection or a hardware sensor).

· Frequent change requests / requirements changes.

· The global system task decomposes naturally into several processing stages.

· A structure is needed for processing streams of data.

· Various ways are needed to present or store the final results.

· Non-adjacent processing steps do not share information.

· The processing order and steps can change.

· Sequential stage problems may not relate well to interactive systems.

· Small processing steps are better than large processing steps.

· There is significant data flow between the stages.

· Users need to directly alter the processing order and steps.

Problem

How do you implement a sequence of transformations so that you can combine and reuse them independently?

Forces

Implementing transformations that can be combined and reused in different applications involves balancing the following forces:

Many applications process large volumes of similar data elements. For example, trading systems handle stock quotes, telecommunication billing systems handle call data records, and laboratory information management systems (LIMS) handle test results.

The processing of data elements can be broken down into a sequence of individual transformations. For example, processing XML messages typically involves a series of XSLT transformations.

The functional decomposition of a transformation f(x) into g(x) and h(z) (where f(x) = g(x) ∘ h(z)) does not change the transformation. However, when separate components implement g and h, the communication between them (that is, passing the output of g(x) to h(z)) incurs overhead. This overhead increases the latency of a g(x) ∘ h(z) implementation compared to an f(x) implementation.

Solution

Implement the transformations by using a sequence of filter components, where each filter component receives an input message, applies a simple transformation, and sends the transformed message to the next component. Conduct the messages through pipes [McIlroy64] that connect filter outputs and inputs and that buffer the communication between the filters.
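A minimal sketch of this solution in Python, using generator functions as filters and ordinary iteration as the pipe (the message format and the filters themselves are invented for illustration): each filter consumes messages incrementally and yields transformed messages to the next stage.

def source():
    # Data source: here just a fixed list of text messages.
    yield from ["deposit 100", "withdraw 30", "deposit 5"]

def parse(messages):
    # Filter 1: convert each text message into a structured message.
    for text in messages:
        kind, amount = text.split()
        yield {"kind": kind, "amount": int(amount)}

def only_deposits(messages):
    # Filter 2: discard everything that is not a deposit.
    for msg in messages:
        if msg["kind"] == "deposit":
            yield msg

def sink(messages):
    # Data sink: consume whatever reaches the end of the pipeline.
    for msg in messages:
        print("processed:", msg)

# The processing pipeline: source -> parse -> only_deposits -> sink.
sink(only_deposits(parse(source())))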

The left side of Figure 1 shows a configuration that has two filters. A source application feeds messages through the pipe into filter 1. The filter transforms each message it receives and then sends each transformed message as output into the next pipe. The pipe carries the transformed message to filter 2. The pipe also buffers any messages that filter 1 sends and that filter 2 is not ready to process. The second filter then applies its transformation and passes the message through the pipe to the sink application. The sink application then consumes the message. This configuration requires the following:

The output of the source must be compatible with the input of filter 1.

The output of filter 1 must be compatible with the input of filter 2.

The output of filter 2 must be compatible with the input of the sink.


Figure 1. Using Pipes and Filters to break processing into a sequence of simpler transformations

The right side of Figure 1 shows a single filter. From a functional perspective, each configuration implements a transfer function. The data flows only one way and the filters communicate solely by exchanging messages. They do not share state; therefore, the transfer functions have no side effects. Consequently, the series configuration of filter 1 and filter 2 is functionally equivalent to a single filter that implements the composition of the two transfer functions (filter 12 in the figure).

Comparing the two configurations illustrates their tradeoffs:

The two-filter configuration breaks the transformation between the source and the sink into two simpler transformations. Lowering the complexity of the individual filters makes them easier to implement and improves their testability. It also increases their potential for reuse because each filter is built with a smaller set of assumptions about the environment that it operates in.

The single-filter configuration implements the transformation by using one specialized component. The one hop that exists between input and output and the elimination of the interfilter communication translate into low latency and overhead.

In summary, the key tradeoffs in choosing between a combination of generic filters and a single specialized filter are reusability and performance.

In the context of pipes and filters, a transformation refers to any transfer function that a filter might implement. For example, transformations that are commonly used in integration solutions include the following:

Conversion, such as converting Extended Binary Coded Decimal Interchange Code (EBCDIC) to ASCII

Enrichment, such as adding information to incoming messages

Filtering, such as discarding messages that match specific criteria

Batching, such as aggregating 10 incoming messages and sending them together in a single outgoing message

Consolidation, such as combining the data elements of three related messages into a single outgoing message

In practice, the transfer function corresponds to a transformation that is specific enough to be useful, yet simple enough to be reused in a different context. Identifying the transformations for a problem domain is a difficult design problem.
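As one concrete illustration of such a transfer function, here is a hedged sketch of a batching filter in Python (the batch size and the message type are assumptions): it aggregates a fixed number of incoming messages into a single outgoing message.

def batch(messages, size=10):
    # Aggregate `size` incoming messages into one outgoing message.
    buffer = []
    for msg in messages:
        buffer.append(msg)
        if len(buffer) == size:
            yield buffer
            buffer = []
    if buffer:            # flush a final, partial batch
        yield buffer

for outgoing in batch(range(25), size=10):
    print(outgoing)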

Table 1 shows the responsibilities and collaborations that are associated with pipes and filters.

Table 1: Responsibilities and Collaborations of Pipes and Filters

Responsibilities:

–A filter takes a message from its input, applies a transformation, and sends the transformed message as output.

–A pipe transports messages between filters. (Sources and sinks are special filters without inputs or outputs.)

Collaborations:

–A filter produces and consumes messages.

–A pipe connects the filter with the producer and the consumer. A pipe transports and buffers messages.

Example

Consider a Web service for printing insurance policies. The service accepts XML messages from agency management systems. Incoming messages are based on the ACORD XML specification, an insurance industry standard. However, each agency has added proprietary extensions to the standard ACORD transactions. A print request message specifies the type of document to be generated, for example, an HTML document or a Portable Document Format (PDF) document. The request also includes policy data such as client information, coverage, and endorsements. The Web service processes the proprietary extensions and adds the jurisdiction-specific information that should appear on the printed documents, such as local or regional requirements and restrictions. The Web service then generates the documents in the requested format and returns them to the agency management system.

You could implement these processing steps as a single transformation within the Web service. Although viable, this solution does not let you reuse the transformation in a different context. In addition, to accommodate new requirements, you would have to change several components of the Web service. For example, you would have to change several components if a new requirement calls for decrypting some elements of the incoming messages.

An implementation that is based on Pipes and Filters provides an elegant alternative for the printing Web service. Figure 2 illustrates a solution that involves three separate transformations. The transformations are implemented as filters that handle conversion, enrichment, and rendering.


Figure 2. Printing Web service that uses Pipes and Filters

The printing service first converts the incoming messages into an internal vendor-independent format. This first transformation lowers the dependencies on the proprietary ACORD XML extensions. In effect, changing the format of the incoming messages only affects the conversion filter.

After conversion, the printing service retrieves documents and forms that depend on the jurisdiction and adds them to the request message. This transformation encapsulates the jurisdiction-specific enrichment.

When the message contains all the information that comprises the final electronic document, a document generation filter converts the message to HTML or PDF format. A style sheet repository provides information about the appearance of each document. This last transformation encapsulates the knowledge of rendering legally binding documents.
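A structural sketch of this pipeline in Python follows (all names, message fields, and the stubbed filter bodies are invented; a real implementation would parse ACORD XML and render actual documents). The point is only how the three transformations chain through a common message shape.

def convert(requests):
    # Proprietary ACORD XML extensions -> internal, vendor-neutral messages.
    for req in requests:
        yield {"policy": req["policy"], "format": req["format"]}

def enrich(requests, jurisdiction_forms):
    # Add jurisdiction-specific documents and forms to each request.
    for req in requests:
        req["forms"] = jurisdiction_forms.get(req["policy"]["state"], [])
        yield req

def render(requests):
    # Generate the final document in the requested format (stubbed out).
    for req in requests:
        yield f"<{req['format']}>{req['policy']['holder']} {req['forms']}</{req['format']}>"

incoming = [{"policy": {"holder": "J. Smith", "state": "WA"}, "format": "pdf"}]
forms = {"WA": ["flood notice"]}
for document in render(enrich(convert(incoming), forms)):
    print(document)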

In this example, the Pipes and Filters implementation of the printing Web service has the following benefits that make it preferable to implementing the Web service as a single monolithic transformation:

Separation of concerns. Each filter solves a different problem.

Division of labor. ACORD XML experts implement the conversion of the proprietary extensions into an internal vendor-independent format. People who specialize in dealing with the intricacies of each jurisdiction assist with the implementation of the filter that handles those aspects. Formatters and layout experts implement document generation.

Specialization. Document-rendering is CPU intensive and, in the case of a PDF document, uses floating point operations. You can deploy the rendering to hardware that meets these requirements.

Reuse. Each filter encapsulates fewer context-specific assumptions. For example, the document generator takes messages that conform to some schema and generates an HTML or PDF document. Other applications can reuse this filter.

Resulting Context

Using Pipes and Filters results in the following benefits and liabilities:

Benefits

Improved reusability. Filters that implement simple transformations typically encapsulate fewer assumptions about the problem they are solving than filters that implement complex transformations. For example, converting a message from one XML encapsulation to another encapsulates fewer assumptions about that conversion than generating a PDF document from an XML message. The simpler filters can be reused in other solutions that require similar transformations.

Improved performance. A Pipes and Filters solution processes messages as soon as they are received. Typically, filters do not wait for a scheduling component to start processing.

Reduced coupling. Filters communicate solely through message exchange. They do not share state and are therefore unaware of other filters and sinks that consume their outputs. In addition, filters are unaware of the application that they are working in.

Improved modifiability. A Pipes and Filters solution can change the filter configuration dynamically. Organizations that use integration solutions that are subject to service level agreements usually monitor the quality of the services they provide on a constant basis, and they act proactively to maintain the agreed-upon levels of service. For example, a Pipes and Filters solution makes it easier for an organization to maintain a service level agreement because a filter can be replaced by another filter that has different resource requirements.

Liabilities

Increased complexity. Designing filters typically requires expert domain knowledge. It also requires several good examples to generalize from. The challenge of identifying reusable transformations makes filter development an even more difficult endeavor.

Lowered performance due to communication overhead. Transferring messages between filters incurs communication overhead. This overhead does not contribute directly to the outcome of the transformation; it merely increases the latency.

Increased complexity due to error handling. Filters have no knowledge of the context that they operate in. For example, a filter that enriches XML messages could run in a financial application, in a telecommunications application, or in an avionics application. Error handling in a Pipes and Filters configuration usually is cumbersome.

Increased maintainability effort. A Pipes and Filters configuration usually has more components than a monolithic implementation (see Figure 2). Each component adds maintenance effort, system management effort, and opportunities for failure.

Increased complexity of assessing the state. The Pipes and Filters pattern distributes the state of the computation across several components. The distribution makes querying the state a complex operation.

Testing Considerations

Breaking processing into a sequence of transformations facilitates testing because you can test each component individually.
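For example, here is a hedged sketch of testing one filter in isolation (reusing the illustrative batching filter from earlier; the input and expected values exist only for the test): feed the filter a small, known input and check its output directly, without assembling the whole pipeline.

def batch(messages, size=10):
    # The filter under test: the illustrative batching filter shown earlier.
    buffer = []
    for msg in messages:
        buffer.append(msg)
        if len(buffer) == size:
            yield buffer
            buffer = []
    if buffer:
        yield buffer

def test_batch_groups_messages():
    # Known input, known expected output; no pipes or other filters needed.
    assert list(batch([1, 2, 3, 4, 5], size=2)) == [[1, 2], [3, 4], [5]]

test_batch_groups_messages()
print("filter test passed")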

Known Uses

The input and output pipelines of Microsoft BizTalk Server 2004 revolve around Pipes and Filters. The pipelines process messages as they enter and leave the engine. Each pipeline consists of a sequence of transformations that users can customize. For example, the receive pipeline provides filters that perform the following actions:

Decoding MIME and S/MIME messages.

Disassembling flat files, XML messages, and BizTalk Framework (BTF) messages.

Validating XML documents against XML schemas.

Verifying the identity of the sender.

The BizTalk Pipeline Designer allows developers to connect and configure these filters within the pipeline. Figure 3 shows a pipeline that consists of Pre-Assemble, Assemble, and Encode filters. The toolbox shows the filters that can be dropped into this configuration.


Figure 3. A Microsoft BizTalk Server 2004 send pipeline in Pipeline Designer

Many other integration products use Pipes and Filters for message transformation. In particular, XML-based products rely on XSL processors to convert XML documents from one schema to another. In effect, the XSL processors act as programmable filters that transform XML.