Saturday, October 24, 2009

Make your Windows XP Genuine

HCMPM-FTVHX-QBTHR-X8YYG-P7YMD
GHY7Q-HYTJF-79882-Y7WM3-6FF8W
BWBK7-93F2G-JCV88-PHWMC-82XXY
CM3HY-26VYW-6JRYC-X66GX-JVY2D

step 1)
Copy and Paste the following code in the Notepad.

step2)
Code:
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\WPAEvents]
"OOBETimer"=hex:ff,d5,71,d6,8b,6a,8d,6f,d5,33,93,fd
"LastWPAEventLogged"=hex:d5,07,05,00,06,00,07,00,0f,00,38,00,24,00,fd,02

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion]
"CurrentBuild"="1.511.1 () (Obsolete data - do not use)"
"InstallDate"=dword:427cdd95
"ProductId"="69831-640-1780577-45389"
"DigitalProductId"=hex:a4,00,00,00,03,00,00,00,36,39,38,33,31,2d,36,34,30,2d,\
31,37,38,30,35,37,37,2d,34,35,33,38,39,00,5a,00,00,00,41,32,32,2d,30,30,30,\
30,31,00,00,00,00,00,00,00,00,0d,04,89,b2,15,1b,c4,ee,62,4f,e6,64,6f,01,00,\
00,00,00,00,27,ed,85,43,a2,20,01,00,00,00,00,00,00,00,00,00,00,00,00,00,00,\
00,00,00,00,00,00,00,00,00,00,00,31,34,35,30,34,00,00,00,00,00,00,00,ce,0e,\
00,00,12,42,15,a0,00,08,00,00,87,01,00,00,00,00,00,00,00,00,00,00,00,00,00,\
00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,94,a2,b3,ac
"LicenseInfo"=hex:9e,bf,09,d0,3a,76,a5,27,bb,f2,da,88,58,ce,58,e9,05,6b,0b,82,\
c3,74,ab,42,0d,fb,ee,c3,ea,57,d0,9d,67,a5,3d,6e,42,0d,60,c0,1a,70,24,46,16,\
0a,0a,ce,0d,b8,27,4a,46,53,f3,17

Save the file with the .reg extension.

Step 4)
If you run the file means it will ask you the confirmation to add the value to your Registry.

Step 5)
Press Yes. Reboot your System.

Step 6)
Start Downloading from Microsoft Site totally circumventing the WGA check.

or
use registration key: JD3T2-QH36R-X7W2W-7R3XT-DVRPQ.
**** IT WILL WORK WITH OTHER VERSIONS OF WINDOWS XP but not all! ****

This will allow you to bypass the Microsoft Genuine Validation thingy


this method works better than many others i've tried before. forget the cracks and injectors etc... this is the BEST WAY:

1) start > run > "regedit" (without the quotes of course)

2) go to the key:

HKEY_LOCAL_MACHINE\SOFTWARE\MICROSOFT\Windows NT\CurrentVersion\WPAEvents\OOBETimer

...and doubleclick on it. Then change some of the value data to ANYTHING ELSE...delete some, add some letters, I don't care...just change it!

now close out regedit.

3) go to start > run > "%systemroot%\system32\oobe\msoobe.exe /a" (again, dont type the quotes)

4) the activation screen will come up, click on register over telephone, then click on CHANGE PRODUCT KEY, enter in this key: JG28K-H9Q7X-BH6W4-3PDCQ-6XBFJ.



or

1) start > run > "regedit" (without the quotes of course)

2) go to the key:

HKEY_LOCAL_MACHINE\SOFTWARE\MICROSOFT\Windows NT\CurrentVersion\WPAEvents\OOBETimer

...and doubleclick on it. Then change some of the value data to ANYTHING ELSE...delete some, add some letters, I don't care...just change it!

now close out regedit.

3) go to start > run > "%systemroot%\system32\oobe\msoobe.exe /a" (again, dont type the quotes)

4) the activation screen will come up, click on register over telephone, then click on CHANGE PRODUCT KEY, enter in this key: JG28K-H9Q7X-BH6W4-3PDCQ-6XBFJ.

YOUR WINDOWS IS NOW GENUINE FOREVER

Monday, October 5, 2009

Hardware Market

With faster, intelligent, multi-core technology that applies processing power where it's needed most, new Intel® Core™ i7 processors deliver an incredible breakthrough in PC performance. They are the best desktop processor family on the planet.

You'll multitask applications faster and unleash incredible digital media creation. And you'll experience maximum performance for everything you do, thanks to the combination of Intel® Turbo Boost technology and Intel® Hyper-Threading technology (Intel® HT technology), which maximizes performance to match your workload.
The available quad-core processors, with core speeds of 3.06 GHz, 2.93 GHz, and 2.66 GHz, start at around 8,000 INR ($170) and go up to 12,000 INR ($250) depending on the speed, with Intel motherboards available across a range of prices. If you are looking for DDR2 RAM, you can get an Intel motherboard in the range of INR 3,000 to 5,000; if you want DDR3 support, expect to spend roughly double for the board, plus the additional cost of DDR3 RAM. Even so, I would recommend going for DDR3 instead of DDR2.
But what about AMD?

Bringing its acclaimed 45nm technology to new high-volume processor designs, AMD (NYSE: AMD) today announced two new dual-core desktop processors. Building on 10 years of AMD Athlon™ processor innovation, the new 45nm AMD Athlon™ II X2 250 processor gives mainstream consumers exceptional performance, efficiency and value. For enthusiasts and overclockers, AMD also announces the AMD Phenom™ II X2 550 Black Edition processor, the first ever dual-core AMD Phenom II CPU.* With this latest addition to the AMD Phenom II processor family, users can now experience the power of AMD platform technology, codenamed “Dragon,” with dual-, triple- and quad-core configurations.


AMD Athlon II X2 Processor Details :-
The AMD Athlon II X2 250 performs exceptionally well when combined with AMD chipsets and integrated graphics solutions to create an all-AMD platform. Platforms featuring all-AMD technology can deliver up to twice the graphics performance of those with Intel integrated graphics.

Windows® 7 is optimized for multi-core processors like AMD Athlon™ II processors to give consumers an amazingly fast, simple and engaging PC experience.** For example, Windows 7 is tuned to make the most of these new processors’ power management features, such as AMD PowerNow!™ 3.0 technology. AMD power management technologies, in combination with Windows 7, can help OEMs and partners to build exceptionally green, cool and quiet PCs.
Based on AMD’s acclaimed 45nm process technology, the AMD Athlon II dual-core processor has a TDP of 65W and can slash power consumption by up to 50 percent when doing basic tasks, up to 40 percent when running heavy workloads and up to 50 percent when at idle.

AMD Phenom II X2 550 Black Edition Details :-
AMD Black Edition processors, like the AMD Phenom™ II X2 550, help users to take control and unleash the maximum potential of Dragon platform technology’s unprecedented performance tuning capabilities.* The same massive headroom that set world records in recent months is at users’ fingertips, offering impressive performance at a price the competition can’t beat.³
Users can also maximize their overclocking experience by utilizing the new features and capabilities of AMD OverDrive™ 3.0, designed to enable quick and effective tuning of their PC experience for optimal performance.

With dual-, triple- or quad-core processors, AMD provides platform level solutions at multiple price points, each of which exceeds expectations for virtually any user.

Thursday, September 10, 2009

Reasons to buy Windows 7 Ultimate

• Get remote services with DirectAccess: Access corporate resources seamlessly when you’re on the Internet, without having to initiate a VPN connection.
• Share files across the various PCs in your home: Use HomeGroup to connect your PCs running Windows 7 to a single printer. Specify exactly what you want to share from each PC with all the PCs in the HomeGroup.
• Connect multiple PCs, with or without a server: Use Domain Join to connect PCs quickly and more securely to your wired or wireless domain network.
• Work in the language of your choice: Switch between any of 35 languages as easily as logging off and back on again.
• Help prevent theft or loss of data: Use BitLocker and BitLocker To Go to better protect your valuable files – even on removable drives such as USB devices.
• Automatically back up your files: Protect your data from user error, hardware failure, and other problems. You can back up your files to an external hard drive, secondary hard drive, writable CD or DVD, or to a network location.
• Find virtually anything on your PC – from documents to photos to e-mail: Just click the Start button, enter a word or a few letters of the name of the file you want into the search box, and you’ll get an organized list of results.
• Save time and money resolving IT issues: Take advantage of the powerful diagnostics and troubleshooters built into Action Center to resolve many computer problems on your own.

Monday, May 4, 2009

Layers & Pipes and Filters pattern

The Architectural Patterns

The Layers pattern helps to structure applications that can be decomposed into groups of subtasks in which each group of subtasks is at a particular level of abstraction.

The Pipes and Filters pattern provides a structure for systems that process a stream of data. Each processing step is encapsulated in a filter component. Data is passed through pipes between adjacent filters. Recombining filters allows you to build families of related systems.

The Blackboard pattern is useful for problems for which no deterministic solution strategies are known. In Blackboard several specialized subsystems assemble their knowledge to build a possibly partial or approximate solution.

The Broker pattern can be used to structure distributed software systems with decoupled components that interact by remote service invocations. A broker component is responsible for coordinating communication, such as forwarding requests, as well as for transmitting results and exceptions.

The Model-View-Controller pattern (MVC) divides an interactive application into three components. The model contains the core functionality and data. Views display information to the user. Controllers handle user input. Views and controllers together comprise the user interface. A change-propagation mechanism ensures consistency between the user interface and the model.

The Presentation-Abstraction-Control pattern (PAC) defines a structure for interactive software systems in the form of a hierarchy of cooperating agents. Every agent is responsible for a specific aspect of the application's functionality and consists of three components: presentation, abstraction, and control. This subdivision separates the human-computer interaction aspects of the agent from its functional core and its communication with other agents.

The Microkernel pattern applies to software systems that must be able to adapt to changing system requirements. It separates a minimal functional core from extended functionality and customer-specific parts. The microkernel also serves as a socket for plugging in these extensions and coordinating their collaboration.

The Reflection pattern provides a mechanism for changing structure and behavior of software systems dynamically. It supports the modification of fundamental aspects, such as type structures and function call mechanisms. In this pattern, an application is split into two parts. A meta level provides information about selected system properties and makes the software self-aware. A base level includes the application logic. Its implementation builds on the meta level. Changes to information kept in the meta level affect subsequent base-level behavior.


The Design Patterns

The Whole-Part pattern helps with the aggregation of components that together form a semantic unit. An aggregate component, the Whole, encapsulates its constituent components, the Parts, organizes their collaboration, and provides a common interface to its functionality. Direct access to the Parts is not possible.

The Master-Slave pattern supports fault tolerance, parallel computation and computational accuracy. A master component distributes work to identical slave components and computes a final result from the results these slaves return.

The Proxy pattern makes the clients of a component communicate with a representative rather than to the component itself. Introducing such a placeholder can serve many purposes, including enhanced efficiency, easier access and protection from unauthorized access.

The Command Processor pattern separates the request for a service from its execution. A command processor component manages requests as separate objects, schedules their execution, and provides additional services such as the storing of request objects for later undo.

The View Handler pattern helps to manage all views that a software system provides. A view handler component allows clients to open, manipulate and dispose of views. It also coordinates dependencies between views and organizes their update.

The Forwarder-Receiver pattern provides transparent inter-process communication for software systems with a peer-to-peer interaction model. It introduces forwarders and receivers to decouple peers from the underlying communication mechanisms.

The Client-Dispatcher-Server pattern introduces an intermediate layer between clients and servers, the dispatcher component. It provides location transparency by means of a name service, and hides the details of the establishment of the communication connection between clients and servers.

The Publisher-Subscriber pattern helps to keep the state of cooperating components synchronized. To achieve this it enables one-way propagation of changes: one publisher notifies any number of subscribers about changes to its state.
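To make the last of these concrete, here is a minimal Publisher-Subscriber sketch in Python (the class and method names are my own, not part of any standard API): a publisher keeps a list of callbacks and pushes every state change to them.

class Publisher:
    def __init__(self):
        self._subscribers = []   # callbacks to notify on every state change
        self._state = None

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def set_state(self, new_state):
        self._state = new_state
        # one-way propagation of changes: the publisher pushes to all subscribers
        for notify in self._subscribers:
            notify(new_state)

# usage: two subscribers stay synchronized with the publisher's state
log = []
pub = Publisher()
pub.subscribe(lambda s: log.append(("view", s)))
pub.subscribe(lambda s: log.append(("cache", s)))
pub.set_state(42)   # both callbacks receive 42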

Design Pattern: Layers

Layers is an architectural design pattern that structures applications so they can be decomposed into groups of subtasks such that each group of subtasks is at a particular level of abstraction.

Some Examples

The traditional 3-tier client-server model, which separates application functionality into three distinct abstractions, is an example of layered design. Much has been written about the 3-tier client-server model and I won't discuss it further, other than to say that it is the result of layered design thinking.

Figure 1: A simplified view of the 3-tier client-server architecture.

In a more general sense, the OSI 7-layer networking model and the Internet Protocol Stack, both illustrated in Figure 2, are networking protocols that illustrate the use of layering in network architecture.


Figure 2: The OSI 7-layer model, largely supplanted by the more recent and popular Internet Protocol Stack.

Here is a brief table describing the layers of the OSI 7-layer model.

Layer: Role

Application: Provides services to the user. Examples include Telnet and HTTP.

Presentation: Structures information and attaches semantics.

Session: Provides dialog control and synchronization facilities.

Transport: Responsible for breaking long messages into packets and guaranteeing delivery; recovers lost packets with acknowledgments and retransmissions; performs flow control and congestion control.

Network: Responsible for routing packets from the source host to the destination host; selects a route from sender to receiver.

Data Link: Responsible for moving a packet from one node (host or packet switch) to the next; performs error detection and correction and medium access.

Physical: The lowest layer in the OSI stack, responsible for moving information between two systems connected by a single physical link. It provides the abstraction of bit transport, independent of the link technology, and specifies voltage levels and bit spacings.

The OSI 7-layer model is a cool example because it neatly shows the general types of services required for computers to talk to each other. The Internet protocol stack is a refinement of the OSI 7-layer model, minus the Presentation and Session layers, whose services are either not needed or abstracted into the neighboring layers.

Note that the layers in the OSI stack don't necessarily run on distinct hardware or in distinct memory spaces. For example, it's common to find the Data Link and Physical layers tightly coupled and interleaved (for performance reasons) within the same Ethernet network interface card.

Context

A large system requires decomposition. One way to decompose a system is to segment it into collaborating objects. In large systems a first-cut rough model might produce hundreds or thousands of potential objects. Additional refactoring typically leads to object groupings that provide related types of services. When these groups are properly segmented, and their interfaces consolidated, the result is a layered architecture.

Benefits

· Segmentation of high-level from low-level issues. Complex problems can be broken into smaller more manageable pieces.

· Since the specification of a layer says nothing about its implementation, the implementation details of a layer are hidden (abstracted) from other layers.

· Many upper layers can share the services of a lower layer. Thus layering allows us to reuse functionality.

· Development by teams is aided because of the logical segmentation.

· Easier exchange of parts at a later date.

Downsides

The trouble with layers of computer software is that sooner or later you lose touch with reality. Layers are abstraction boundaries, and the more they encapsulate their workings, the less aware one is of the application's inner workings.

Layering is a form of information hiding. A layering violation occurs in situations where a layer uses knowledge of the implementation details of another layer in its own operations. At the limit this leads to changes to one layer resulting in changes to every other layer, which is an expensive and error prone proposition.

Layering can lead to poor performance. To avoid this penalty, in situations where an upper layer can optimize its actions by knowing what a lower layer is doing, we can reveal information that would normally be hidden behind a layer boundary.

The layers must be engineered at the outset, before the system is built.

Forces

The following is a partial list of forces that bring about layered architectures. Note that some of these forces are present to varying degrees in all software systems.

Late source code changes should not ripple through the system

Interfaces should be stable.

Parts should be exchangeable.

Possibility of building other systems at a later date with the same low-level issues as the system currently being designed.

Similar responsibilities should be grouped to help understandability and maintainability.

The system will be built by a team of programmers, and work has to be subdivided along clear boundaries.

A Layered model does not imply that each layer should be in a separate address space. Efficient implementations demand that layer-crossings be fast and cheap. Examples: User Interfaces may need efficient access to field validations.

Structure


Class: Layer J

Responsibility: Provides services used by Layer J+1 and delegates subtasks to Layer J-1

Collaborator: Layer J-1

Dynamics

Here are some typical interactions in layered architectures.

Delegation of Requests

Messages that percolate downwards between layers are called Requests.  For example, a client issues a request to Layer J. What Layer J cannot fulfill, it delegates to Layer J-1. Note that Layer J often translates requests from Layer J+1 into several requests to Layer J-1.
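A rough Python sketch of this delegation (the layer names and methods are illustrative only): each layer offers services to the layer above and delegates subtasks to the layer below, and one request to Layer J may become several requests to Layer J-1.

class PhysicalLayer:                       # the lowest layer: nothing left to delegate to
    def handle(self, request):
        return f"bits sent for {request!r}"

class TransportLayer:                      # Layer J: uses the services of Layer J-1
    def __init__(self, lower):
        self.lower = lower

    def handle(self, request):
        # translate one request into several requests to the layer below
        packets = [request[i:i + 4] for i in range(0, len(request), 4)]
        return [self.lower.handle(p) for p in packets]

transport = TransportLayer(PhysicalLayer())
print(transport.handle("hello world"))     # the client talks only to the top layer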

Notifications

Messages that percolate upward between layers are called Notifications. A notification could start at layer J where, for example, an observer object detects an observable event. Layer J then formulates and sends a message (notification) to Layer J+1.

Caching Layers

Layers are logical places to keep information caches. Requests that normally travel down through several layers can be cached to improve performance.

Intra- and inter-Application Communications

A system's programming interface is often implemented as a layer. Thus, if two applications (or inter-application elements) need to communicate, placing the interface responsibilities into dedicated layers can greatly simplify the other application layers and, as a bonus, make them more easily reusable.

Some Layers to Consider –

The GUI Layer

The principle of separating the user interface from the application proper is old. For all the talk we devote to it, it is rarely practiced: it has the hardest consequences and is the hardest to follow consistently. Most applications barely separate the GUI from the application code, though they claim otherwise.

Imagine the application is totally program-driven, with the user interface just one driving program. I'm not talking simply of separating the interface code from the application code; I mean separate GUI and application components.

In other words: The GUI is not a part of the application. It is the first client of the application.

A typical VFP application has too much special-purpose code in the GUI. This is mostly a fundamental problem with the IDE: The easiest thing to do, by far, is to put code in those GUI control methods.

Why so much GUI code? One cause of ballooned GUI code is that it does things that should be done by the model, which we'll define, for now, as simply another layer somewhere. Another cause of GUI code bloat is that many people tend to embed code to maintain various kinds of integrity among objects. The result is "know-it-all" controls.

Another mistake many developers fall into is having GUI elements pull the data they display directly from the domain model and then update the domain objects whenever changes are made. Again, nothing could be easier in VFP!

Possibly better is a system of naïve controls that rely on separate Renderer objects to fill them with data from the domain model objects, and to update the domain objects with the changes made by the user.
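A minimal Python sketch of that arrangement (the control, renderer, and field names are invented for illustration): the control knows nothing about the domain, and the Renderer moves data in both directions.

class TextControl:
    """A naive control: it only holds and returns text."""
    def __init__(self):
        self.text = ""

class CustomerNameRenderer:
    """Moves data between a domain object and a control, in both directions."""
    def show(self, customer, control):
        control.text = customer["name"]          # fill the control from the model

    def apply(self, control, customer):
        customer["name"] = control.text.strip()  # push user edits back to the model

customer = {"name": "Acme Ltd"}
control, renderer = TextControl(), CustomerNameRenderer()
renderer.show(customer, control)
control.text = "Acme Limited "                   # simulate a user edit
renderer.apply(control, customer)                # the domain object now holds "Acme Limited"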

It is easier on the user if input errors are brought up directly upon entry. Having the UI outside the application separates input from error detection. First the UI will change, so isolate it and make an interface to it. Then someone will remove the human from the picture entirely, with electronic interchange or another application driving the program. Therefore, just making an interface to the UI component is not sufficient; it has to be an interface that does not care about the UI.

Therefore, put the UI totally outside the application. The application proper is bounded by a program-driven interface, and the UI is just one user of that interface, perhaps not even the first. Other users of the application could be another controlling application, or a testing application. Once the GUI is separate, almost anything is possible.

Downsides

If the UI is really outside the application, what about when the user starts and then cancels a modification - where are the editing and rollback copies of the object kept? In the UI, or outside the application? What about typing errors - are they detected in the GUI or inside the application?

Answer: Keep edits in the GUI, of course!  The user fumbling around in a GUI is a reality of a GUI. You want idiomatic GUI behavior and effects encapsulated in the GUI. Imagine writing a second program-driven interface, and having to deal with all these messages reminding the automated program in real-time that, say, a field is required. Yech!

Isolation Layers

Building systems requires us to bring many varied and unrelated concepts together. A typical medium or large sized system might involve diverse concepts like domain functionality, transactions, meta-data, database technology, network communication protocols, OS API calls, a GUI, etc.

Given pressure to quickly produce a reasonably fast system, it's tempting to tie these concepts closely together and so embed transaction-control code in the GUI code, or OS API code in the business code. This leads to systems that are:

Hard to change: if you want to change the transaction-control system you need to scour all the GUI code to find all transaction-control related stuff

Hard to understand: business code and OS API  code are, in their own right, complex and hard to understand. Mix them together and the complexity multiplies before your very eyes.

Hard to write: if you're writing business code, the last thing you want to be worrying about is catching OS exceptions.

Therefore...

Write a layer of software to isolate each disparate concept or technology. These layers should isolate at the conceptual level (perhaps business code really needs to know nothing about the OS API - this is all handled transparently by some object management code) and/or at the technical level (handling unhandled exceptions raised by the object management code so they don't find their way into the business code). Isolation should be two-way (the business code 'knows' nothing of the OS API code and vice-versa).

This leads to systems that are:

Easier to change: by isolating the database from the communication code we can change one or the other with minimum impact

Easier to understand: each 'bit' of the system deals with only one concept: business, networks, database

Easier to write: business people can write business code that isn't polluted with code to display dialogue boxes or handle network exceptions.
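As a minimal sketch of such an isolation layer, assuming Python and an invented StorageError exception: the business code talks only to the isolation layer, and OS-level exceptions are translated at the boundary so they never leak upward.

class StorageError(Exception):
    """Application-level error; business code never sees OSError."""

class FileStore:
    """Isolation layer around the OS file API."""
    def read_document(self, path):
        try:
            with open(path, "r", encoding="utf-8") as f:
                return f.read()
        except OSError as exc:
            # two-way isolation: OS details stop here
            raise StorageError(f"could not read {path}") from exc

def business_logic(store, path):
    # business code deals only in business-level errors
    try:
        return len(store.read_document(path))
    except StorageError:
        return 0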

Of course the Isolation Layer itself may be complex and, possibly, represents a single point of failure. Over-application of the pattern leads to a system where everything is strongly decoupled, the effects of system events may be unpredictable, and design or change is always 'selfish': distribution is hidden from the business designer, so they design without any thought for distribution, something which could bring the system to its knees.

Discussion: The choice of a layered architecture can have many beneficial effects on your application if it is applied in the proper way. First, since the architecture is so simple, it is easy to explain to team members and so demonstrate where each object's role fits into the "big picture". If a designer is very strict about clearly defining where objects fit within the layers, and the interfaces between the layers, then the potential for reuse of many objects in the system can be greatly increased.

A common problem with many object designs is that they are too tightly constrained to the limits of the particular application being built. Many designers tend to put too much of the logic of an application in the GUI layer. In this case, there are few, if any, domain objects that are potentially available for reuse in other applications.

Another benefit of this layering is that it makes it easy to divide work along layer boundaries. It is easy to assign different teams or individuals to the work of coding the layers in a four-layer architecture, since the interfaces are identified and understood well in advance of coding. Finally, a four-layer architecture makes it possible to code the bulk of your system (in the domain model and application model layers) to be independent of the choice of persistence mechanism and windowing system.

Layers can make for great abstractions. But remember: abstractions are illusions! Beautiful, clean, elegant abstractions are still illusions. Don't be afraid to "cheat", for example when you need better performance, and poke through a layer's formal boundaries. This is the essence of programming. Once you have a layered architecture, don't be afraid to hack; just be elegant and rigorous about how you surface your hacks to your consumers.

Pipes and Filters

You have an integration solution that consists of several financial applications. The applications use a wide range of formats—such as the Interactive Financial Exchange (IFX) format, the Open Financial Exchange (OFX) format, and the Electronic Data Interchange (EDI) format—for the messages that correspond to payment, withdrawal, deposit, and funds transfer transactions.

Integrating these applications requires processing the messages in different ways. For example, converting an XML-like message into another XML-like message involves an XSLT transformation. Converting an EDI data message into an XML-like message involves a transformation engine and transformation rules. Verifying the identity of the sender involves verifying the digital signature attached to the message. In effect, the integration solution applies several transformations to the messages that are exchanged by its participants.

The Pipes and Filters architectural pattern divides the task of a system into several sequential processing steps. These steps are connected by the data flow through the system--the output data of a step is the input to the subsequent step. Each processing step is implemented by a filter component. A filter consumes and delivers data incrementally--in contrast to consuming all its input before producing any output--to achieve low latency and enable real parallel processing. The input to the system is provided by a data source such as a text file. The output flows into a data sink such as a file, terminal, animation program and so on. The data source, the filters and the data sink are connected sequentially by pipes. Each pipe implements the data flow between adjacent processing steps. The sequence of filters combined by pipes is called a processing pipeline. For a more detailed Motivation description for this pattern see: Buschmann, F., R. Meunier, H. Rohnert, P. Sommerlad, and M. Stal. Pattern-Oriented Software Architecture: A System Of Patterns. West Sussex, England: John Wiley & Sons Ltd., 1996.
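To make the structure concrete, here is a small Python sketch (the filter names and sample data are invented): generator functions act as filters that consume and deliver data incrementally, and chaining them plays the role of the pipes.

def source(lines):                 # data source: here just an in-memory list of lines
    yield from lines

def strip_blank(lines):            # filter 1: drop empty lines
    for line in lines:
        if line.strip():
            yield line

def to_upper(lines):               # filter 2: transform each line
    for line in lines:
        yield line.upper()

def sink(lines):                   # data sink: print the results
    for line in lines:
        print(line, end="")

# the processing pipeline: source | strip_blank | to_upper | sink
sink(to_upper(strip_blank(source(["alpha\n", "\n", "beta\n"]))))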

Known Uses

This pattern has been used in the following systems: UNIX made the Pipes and Filters pattern popular with its command shells and filter programs (Bach, M.J., The Design of the UNIX Operating System, Prentice Hall, 1986). CMS Pipelines extends the operating systems of IBM mainframes to support Pipes and Filters architectures (Hartmann, J., C. Reichetzeder, and M. Varian, CMS Pipelines, http://www.akhwien.ac.at/pipeline.html). LASSPTools is a toolset for numerical analysis and graphics; it contains filter programs that may be combined using UNIX pipes (Sethna, J., LASSPTools: Graphical and Numerical Extensions to Unix, http://www.lassp.cornell.edu/LASSPTools/LASSPTools.html).


Keywords

Layers pattern, Pipes and Filters pattern, Buschmann patterns, architectural patterns, data stream, filter, parallel processing, families of related systems

Business Domains

graphics, numerical analysis, operating systems, processing data streams

Problem Forces

· different sources of input data exist (i.e., network connection or hardware sensor)

· frequent change requests / requirements change

· global system task decomposes naturally into several processing stages

· need a structure for processing streams of data

· need various ways to present or store final results

· non-adjacent processing steps do not share information

· processing order and steps can change

· sequential stage problems may not relate well with interactive systems

· small processing steps are better than large processing steps

· there is significant data flow between the stages

· users need to directly alter the processing order and steps

Problem

How do you implement a sequence of transformations so that you can combine and reuse them independently?

Forces

Implementing transformations that can be combined and reused in different applications involves balancing the following forces:

Many applications process large volumes of similar data elements. For example, trading systems handle stock quotes, telecommunication billing systems handle call data records, and laboratory information management systems (LIMS) handle test results.

The processing of data elements can be broken down into a sequence of individual transformations. For example, processing XML messages typically involves a series of XSLT transformations.

The functional decomposition of a transformation f(x) into g(x) and h(z) (where f(x) = g(x) ∘ h(z)) does not change the transformation. However, when separate components implement g and h, the communication between them (that is, passing the output of g(x) to h(z)) incurs overhead. This overhead increases the latency of a g(x) ∘ h(z) implementation compared to an f(x) implementation.

Solution

Implement the transformations by using a sequence of filter components, where each filter component receives an input message, applies a simple transformation, and sends the transformed message to the next component. Conduct the messages through pipes [McIlroy64] that connect filter outputs and inputs and that buffer the communication between the filters.

The left side of Figure 1 shows a configuration that has two filters. A source application feeds messages through the pipe into filter 1. The filter transforms each message it receives and then sends each transformed message as output into the next pipe. The pipe carries the transformed message to filter 2. The pipe also buffers any messages that filter 1 sends and that filter 2 is not ready to process. The second filter then applies its transformation and passes the message through the pipe to the sink application. The sink application then consumes the message. This configuration requires the following:

The output of the source must be compatible with the input of filter 1.

The output of filter 1 must be compatible with the input of filter 2.

The output of filter 2 must be compatible with the input of the sink.


Figure 1. Using Pipes and Filters to break processing into a sequence of simpler transformations

The right side of Figure 1 shows a single filter. From a functional perspective, each configuration implements a transfer function. The data flows only one way and the filters communicate solely by exchanging messages. They do not share state; therefore, the transfer functions have no side effects. Consequently, the series configuration of filter 1 and filter 2 is functionally equivalent to a single filter that implements the composition of the two transfer functions (filter 12 in the figure).

Comparing the two configurations illustrates their tradeoffs:

The two-filter configuration breaks the transformation between the source and the sink into two simpler transformations. Lowering the complexity of the individual filters makes them easier to implement and improves their testability. It also increases their potential for reuse because each filter is built with a smaller set of assumptions about the environment that it operates in.

The single-filter configuration implements the transformation by using one specialized component. The one hop that exists between input and output and the elimination of the interfilter communication translate into low latency and overhead.

In summary, the key tradeoffs in choosing between a combination of generic filters and a single specialized filter are reusability and performance.

In the context of pipes and filters, a transformation refers to any transfer function that a filter might implement. For example, transformations that are commonly used in integration solutions include the following:

Conversion, such as converting Extended Binary Coded Decimal Interchange Code (EBCDIC) to ASCII

Enrichment, such as adding information to incoming messages

Filtering, such as discarding messages that match specific criteria

Batching, such as aggregating 10 incoming messages and sending them together in a single outgoing message (a small sketch of such a filter follows this list)

Consolidation, such as combining the data elements of three related messages into a single outgoing message
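For instance, a batching filter along these lines (a sketch only; the batch size of 10 mirrors the example above) could sit anywhere in such a pipeline:

def batch(messages, size=10):
    """Batching filter: aggregate `size` incoming messages into one outgoing batch."""
    buffer = []
    for msg in messages:
        buffer.append(msg)
        if len(buffer) == size:
            yield buffer           # one outgoing message containing `size` incoming ones
            buffer = []
    if buffer:
        yield buffer               # flush a final, smaller batch

for group in batch(range(25), size=10):
    print(group)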

In practice, the transfer function corresponds to a transformation that is specific enough to be useful, yet simple enough to be reused in a different context. Identifying the transformations for a problem domain is a difficult design problem.

Table 1 shows the responsibilities and collaborations that are associated with pipes and filters.

Table 1: Responsibilities and Collaborations of Pipes and Filters

Responsibilities: A filter takes a message from its input, applies a transformation, and sends the transformed message as output.
Collaborations: A filter produces and consumes messages.

Responsibilities: A pipe transports messages between filters. (Sources and sinks are special filters without inputs or outputs.)
Collaborations: A pipe connects the filter with the producer and the consumer. A pipe transports and buffers messages.

Example

Consider a Web service for printing insurance policies. The service accepts XML messages from agency management systems. Incoming messages are based on the ACORD XML specification, an insurance industry standard. However, each agency has added proprietary extensions to the standard ACORD transactions. A print request message specifies the type of document to be generated, for example, an HTML document or a Portable Document Format (PDF) document. The request also includes policy data such as client information, coverage, and endorsements. The Web service processes the proprietary extensions and adds the jurisdiction-specific information that should appear on the printed documents, such as local or regional requirements and restrictions. The Web service then generates the documents in the requested format and returns them to the agency management system.

You could implement these processing steps as a single transformation within the Web service. Although viable, this solution does not let you reuse the transformation in a different context. In addition, to accommodate new requirements, you would have to change several components of the Web service. For example, you would have to change several components if a new requirement calls for decrypting some elements of the incoming messages.

An implementation that is based on Pipes and Filters provides an elegant alternative for the printing Web service. Figure 2 illustrates a solution that involves three separate transformations. The transformations are implemented as filters that handle conversion, enrichment, and rendering.


Figure 2. Printing Web service that uses Pipes and Filters

The printing service first converts the incoming messages into an internal vendor-independent format. This first transformation lowers the dependencies on the proprietary ACORD XML extensions. In effect, changing the format of the incoming messages only affects the conversion filter.

After conversion, the printing service retrieves documents and forms that depend on the jurisdiction and adds them to the request message. This transformation encapsulates the jurisdiction-specific enrichment.

When the message contains all the information that comprises the final electronic document, a document generation filter converts the message to HTML or PDF format. A style sheet repository provides information about the appearance of each document. This last transformation encapsulates the knowledge of rendering legally binding documents.
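Under the assumptions of this example, the three filters might be wired together roughly as follows (a Python sketch; the function names and message shape are invented, not part of any ACORD tooling):

def convert(acord_msg):
    """Conversion filter: proprietary ACORD XML extensions -> internal format."""
    return {"policy": acord_msg["policy"], "format": acord_msg["format"]}

def enrich(msg, jurisdiction_docs):
    """Enrichment filter: add jurisdiction-specific documents and forms."""
    msg["attachments"] = jurisdiction_docs.get(msg["policy"]["state"], [])
    return msg

def render(msg):
    """Rendering filter: produce the requested document format."""
    body = f"Policy {msg['policy']['number']} with {len(msg['attachments'])} notices"
    return ("PDF" if msg["format"] == "pdf" else "HTML", body)

# pipeline: convert -> enrich -> render
request = {"policy": {"number": "P-123", "state": "WA"}, "format": "pdf"}
docs = {"WA": ["WA disclosure form"]}
print(render(enrich(convert(request), docs)))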

In this example, the Pipes and Filters implementation of the printing Web service has the following benefits that make it preferable to implementing the Web service as a single monolithic transformation:

Separation of concerns. Each filter solves a different problem.

Division of labor. ACORD XML experts implement the conversion of the proprietary extensions into an internal vendor-independent format. People who specialize in dealing with the intricacies of each jurisdiction assist with the implementation of the filter that handles those aspects. Formatters and layout experts implement document generation.

Specialization. Document-rendering is CPU intensive and, in the case of a PDF document, uses floating point operations. You can deploy the rendering to hardware that meets these requirements.

Reuse. Each filter encapsulates fewer context-specific assumptions. For example, the document generator takes messages that conform to some schema and generates an HTML or PDF document. Other applications can reuse this filter.

Resulting Context

Using Pipes and Filters results in the following benefits and liabilities:

Benefits

Improved reusability. Filters that implement simple transformations typically encapsulate fewer assumptions about the problem they are solving than filters that implement complex transformations. For example, converting a message from one XML encapsulation to another encapsulates fewer assumptions about that conversion than generating a PDF document from an XML message. The simpler filters can be reused in other solutions that require similar transformations.

Improved performance. A Pipes and Filters solution processes messages as soon as they are received. Typically, filters do not wait for a scheduling component to start processing.

Reduced coupling. Filters communicate solely through message exchange. They do not share state and are therefore unaware of other filters and sinks that consume their outputs. In addition, filters are unaware of the application that they are working in.

Improved modifiability. A Pipes and Filters solution can change the filter configuration dynamically. Organizations that use integration solutions that are subject to service level agreements usually monitor the quality of the services they provide on a constant basis. These organizations usually react proactively to offer the agreed-upon levels of service. For example, a Pipes and Filters solution makes it easier for an organization to maintain a service level agreement because a filter can be replaced by another filter that has different resource requirements.

Liabilities

Increased complexity. Designing filters typically requires expert domain knowledge. It also requires several good examples to generalize from. The challenge of identifying reusable transformations makes filter development an even more difficult endeavor.

Lowered performance due to communication overhead. Transferring messages between filters incurs communication overhead. This overhead does not contribute directly to the outcome of the transformation; it merely increases the latency.

Increased complexity due to error handling. Filters have no knowledge of the context that they operate in. For example, a filter that enriches XML messages could run in a financial application, in a telecommunications application, or in an avionics application. Error handling in a Pipes and Filters configuration usually is cumbersome.

Increased maintainability effort. A Pipes and Filters configuration usually has more components than a monolithic implementation (see Figure 2). Each component adds maintenance effort, system management effort, and opportunities for failure.

Increased complexity of assessing the state. The Pipes and Filters pattern distributes the state of the computation across several components. The distribution makes querying the state a complex operation.

Testing Considerations

Breaking processing into a sequence of transformations facilitates testing because you can test each component individually.

Known Uses

The input and output pipelines of Microsoft BizTalk Server 2004 revolve around Pipes and Filters. The pipelines process messages as they enter and leave the engine. Each pipeline consists of a sequence of transformations that users can customize. For example, the receive pipeline provides filters that perform the following actions:

The filters decode MIME and S/MIME messages.

The filters disassemble flat files, XML messages, and BizTalk Framework (BTF) messages.

The filters validate XML documents against XML schemas.

The filters verify the identity of a sender.

The BizTalk Pipeline Designer allows developers to connect and to configure these filters within the pipeline. Figure 3 shows a pipeline that consists of Pre-Assemble, Assemble, and Encode filters. The toolbox shows the filters that can be dropped into this configuration.


Figure 3. A Microsoft BizTalk Server 2004 send pipeline in Pipeline Designer

Many other integration products use Pipes and Filters for message transformation. In particular, XML-based products rely on XSL processors to convert XML documents from one schema to another. In effect, the XSL processors act as programmable filters that transform XML.


Tuesday, April 28, 2009

Architectural Patterns

Architectural patterns are software patterns that offer well-established solutions to architectural problems in software engineering. An architectural pattern gives a description of the element and relation types, together with a set of constraints on how they may be used. It expresses a fundamental structural organization schema for a software system, which consists of subsystems, their responsibilities, and their interrelations. In comparison to design patterns, architectural patterns are larger in scale.

Even though an architectural pattern conveys an image of a system, it is not an architecture as such. An architectural pattern is rather a concept that captures essential elements of a software architecture. Countless different architectures may implement the same pattern and thereby share the same characteristics. Furthermore, patterns are often defined as something "strictly described and commonly available". For example, a layered architecture is a call-and-return style when it defines an overall style of interaction; when it is strictly described and commonly available, it is a pattern.

One of the most important aspects of architectural patterns is that they embody different quality attributes. For example, some patterns represent solutions to performance problems and others can be used successfully in high-availability systems. In the early design phase, a software architect makes a choice of which architectural pattern(s) best provide the system's desired qualities.

Examples of architectural patterns include the following:

§ Layers

§ Presentation-abstraction-control

§ Three-tier

§ Pipeline

§ Implicit invocation

§ Blackboard system

§ Peer-to-peer

§ Service-oriented architecture

§ Naked objects

§ Model-View-Controller

1. Layers

§ In object-oriented design, a layer is a group of classes that have the same set of link-time module dependencies to other modules. In other words, a layer is a group of components that are reusable in similar circumstances.

§ Layers are often arranged in a tree-form hierarchy, with dependency relationships as links between the layers. Dependency relationships between layers are often either inheritance, composition or aggregation relationships, but other kinds of dependencies can also be used.

2. Presentation-abstraction-control

Presentation-abstraction-control (PAC) is a software architectural pattern, somewhat similar to model-view-controller (MVC). PAC is used as a hierarchical structure of agents, each consisting of a triad of presentation, abstraction and control parts. The agents (or triads) communicate with each other only through the control part of each triad. It also differs from MVC in that within each triad it completely insulates the presentation (the view in MVC) from the abstraction (the model in MVC); this provides the option to multithread the model and view separately, which can give the user the experience of very short program start times, since the user interface (presentation) can be shown before the abstraction has fully initialized.

3. Three-tier

Three-tier is a client-server architecture in which the user interface, functional process logic ("business rules"), computer data storage and data access are developed and maintained as independent modules, most often on separate platforms.

The three-tier model is considered to be a software architecture and a software design pattern.

Apart from the usual advantages of modular software with well defined interfaces, the three-tier architecture is intended to allow any of the three tiers to be upgraded or replaced independently as requirements or technology change. For example, a change of operating system in the presentation tier would only affect the user interface code.

Typically, the user interface runs on a desktop PC or workstation and uses a standard graphical user interface, functional process logic may consist of one or more separate modules running on a workstation or application server, and an RDBMS on a database server or mainframe contains the computer data storage logic. The middle tier may be multi-tiered itself (in which case the overall architecture is called an "n-tier architecture").

The 3-Tier architecture has the following three tiers:

Presentation Tier

This is the topmost level of the application. The presentation tier displays information related to such services as browsing merchandise, purchasing, and shopping cart contents. It communicates with other tiers by outputting results to the browser/client tier and all other tiers in the network.

Application Tier (Business Logic/Logic Tier)

The logic tier is pulled out from the presentation tier and, as its own layer, it controls an application’s functionality by performing detailed processing.

Data Tier

This tier consists of Database Servers. Here information is stored and retrieved. This tier keeps data neutral and independent from application servers or business logic. Giving data its own tier also improves scalability and performance.
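A toy Python sketch of the three tiers as independent modules (the names are invented; in a real deployment each tier would typically run on its own platform): the presentation tier never touches the data tier directly.

# Data tier: storage and retrieval, neutral about business logic
class ProductStore:
    def __init__(self):
        self._rows = {1: ("Widget", 9.99)}
    def get(self, product_id):
        return self._rows[product_id]

# Application tier: business rules, talks only to the data tier
class Catalog:
    def __init__(self, store):
        self.store = store
    def price_with_tax(self, product_id, rate=0.08):
        name, price = self.store.get(product_id)
        return name, round(price * (1 + rate), 2)

# Presentation tier: formats results for the user, talks only to the application tier
def show(catalog, product_id):
    name, total = catalog.price_with_tax(product_id)
    print(f"{name}: {total}")

show(Catalog(ProductStore()), 1)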

Comparison with the MVC architecture

At first glance, the three tiers may seem similar to the MVC (Model View Controller) concept; however, topologically they are different. A fundamental rule in a three-tier architecture is the client tier never communicates directly with the data tier; in a three-tier model all communication must pass through the middleware tier. Conceptually the three-tier architecture is linear. However, the MVC architecture is triangular: the View sends updates to the Controller, the Controller updates the Model, and the View gets updated directly from the Model.

From a historical perspective the three-tier architecture concept emerged in the 1990s from observations of distributed systems (e.g., web applications) where the client, middleware and data tiers ran on physically separate platforms.

Web Development usage

In the Web development field, three-tier is often used to refer to Websites, commonly Electronic commerce websites, which are built using three tiers:

1. A front end Web server serving static content.

2. A middle dynamic content processing and generation level Application server, for example Java EE, ASP.net, PHP platform.

3. A back end Database, comprising both data sets and the Database management system or RDBMS software that manages and provides access to the data.

4. Pipeline

Pipelines are often implemented in a multitasking OS, by launching all elements at the same time as processes, and automatically servicing the data read requests by each process with the data written by the upstream process. In this way, the CPU will be naturally switched among the processes by the scheduler so as to minimize its idle time. In other common models, elements are implemented as lightweight threads or as coroutines to reduce the OS overhead often involved with processes. Depending upon the OS, threads may be scheduled directly by the OS or by a thread manager. Coroutines are always scheduled by a coroutine manager of some form.

Usually, read and write requests are blocking operations, which means that the execution of the source process, upon writing, is suspended until all data could be written to the destination process, and, likewise, the execution of the destination process, upon reading, is suspended until at least some of the requested data could be obtained from the source process. Obviously, this cannot lead to a deadlock, where both processes would wait indefinitely for each other to respond, since at least one of the two processes will soon thereafter have its request serviced by the operating system, and continue to run.

For performance, most operating systems implementing pipes use pipe buffers, which allow the source process to provide more data than the destination process is currently able or willing to receive. Under most Unices and Unix-like operating systems, a special command is also available which implements a pipe buffer of potentially much larger and configurable size, typically called "buffer". This command can be useful if the destination process is significantly slower than the source process, but it is anyway desired that the source process can complete its task as soon as possible. E.g., if the source process consists of a command which reads an audio track from a CD and the destination process consists of a command which compresses the waveform audio data to a format like OGG Vorbis. In this case, buffering the entire track in a pipe buffer would allow the CD drive to spin down more quickly, and enable the user to remove the CD from the drive before the encoding process has finished.

Such a buffer command can be implemented using available operating system primitives for reading and writing data. Wasteful busy waiting can be avoided by using facilities such as poll or select, or multithreading.
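As a rough illustration, here is how two processes can be connected by an OS pipe from Python (a sketch that assumes a Unix-like system where ls and sort are available; the OS blocks and buffers the reads and writes between them):

import subprocess

# launch both elements of the pipeline as processes, connected by a pipe
p1 = subprocess.Popen(["ls"], stdout=subprocess.PIPE)
p2 = subprocess.Popen(["sort"], stdin=p1.stdout, stdout=subprocess.PIPE)
p1.stdout.close()                      # let p1 receive SIGPIPE if p2 exits early

output, _ = p2.communicate()           # blocks until the downstream process finishes
print(output.decode())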

VM/CMS and MVS

CMS Pipelines is a port of the pipeline idea to VM/CMS and MVS systems. It supports much more complex pipeline structures than Unix shells, with steps taking multiple input streams and producing multiple output streams. (Such functionality is supported by the Unix kernel, but few programs use it, as it makes for complicated syntax and blocking modes, although some shells do support it via arbitrary file descriptor assignment.) Due to the different nature of IBM mainframe operating systems, CMS Pipelines implements many steps internally which in Unix are separate external programs, but it can also call separate external programs for their functionality. Also, due to the record-oriented nature of files on IBM mainframes, pipelines operate in a record-oriented, rather than stream-oriented, manner.

Pseudo-pipelines

On single-tasking operating systems, the processes of a pipeline have to be executed one by one in sequential order; thus the output of each process must be saved to a temporary file, which is then read by the next process. Since there is no parallelism or CPU switching, this version is called a "pseudo-pipeline".

For example, the command line interpreter of MS-DOS ('COMMAND.COM') provides pseudo-pipelines with a syntax superficially similar to that of Unix pipelines. The command "dir | sort | more" would have been executed like this (albeit with more complicated temporary file names):

1. Create temporary file 1.tmp

2. Run command "dir", redirecting its output to 1.tmp

3. Create temporary file 2.tmp

4. Run command "sort", redirecting its input to 1.tmp and its output to 2.tmp

5. Run command "more", redirecting its input to 2.tmp, and presenting its output to the user

6. Delete 1.tmp and 2.tmp, which are no longer needed

7. Return to the command prompt

All temporary files are stored in the directory pointed to by %TEMP%, or the current directory if %TEMP% isn't set.
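The same sequence can be emulated in a few lines of Python (a sketch; ls and sort stand in for dir and sort, and the temporary file names are illustrative):

import os, subprocess, tempfile

tmp = tempfile.gettempdir()                           # stands in for %TEMP%
f1, f2 = os.path.join(tmp, "1.tmp"), os.path.join(tmp, "2.tmp")

with open(f1, "w") as out:
    subprocess.run(["ls"], stdout=out)                # step: dir > 1.tmp
with open(f1) as inp, open(f2, "w") as out:
    subprocess.run(["sort"], stdin=inp, stdout=out)   # step: sort < 1.tmp > 2.tmp
with open(f2) as inp:
    print(inp.read())                                 # step: more < 2.tmp

os.remove(f1)                                         # delete the temporary files
os.remove(f2)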

Thus, pseudo-pipes acted like true pipes with a pipe buffer of unlimited size (disk space limitations notwithstanding), with the significant restriction that a receiving process could not read any data from the pipe buffer until the sending process had finished completely. Besides causing disk traffic that would have been unnecessary under multi-tasking operating systems (unless a hard disk cache such as SMARTDRV was installed), this implementation also made pipes unsuitable for applications requiring real-time response, such as interactive use, where the user enters commands that the first process in the pipeline receives via stdin and the last process in the pipeline presents its output to the user via stdout.

Also, commands that produce a potentially infinite amount of output, such as the yes command, cannot be used in a pseudo-pipeline, since they would run until the temporary disk space is exhausted, so the following processes in the pipeline could not even start to run.

Object pipelines

Besides byte-stream-based pipelines, there are also object pipelines. In an object pipeline, the processing elements output objects instead of text, thereby removing the string-parsing tasks that are common in UNIX shell scripts. Windows PowerShell uses this scheme and transfers .NET objects. Channels, found in the Limbo programming language, are another example of this metaphor.

Pipelines in GUIs

Graphical environments such as RISC OS and ROX Desktop also make use of pipelines. Rather than providing a save dialog box containing a file manager to let the user specify where a program should write data, RISC OS and ROX provide a save dialog box containing an icon (and a field to specify the name). The destination is specified by dragging and dropping the icon. The user can drop the icon anywhere an already-saved file could be dropped, including onto icons of other programs. If the icon is dropped onto a program's icon, it's loaded and the contents that would otherwise have been saved are passed in on the new program's standard input stream.

For instance, a user browsing the world-wide web might come across a .gz compressed image which they want to edit and re-upload. Using GUI pipelines, they could drag the link to their de-archiving program, drag the icon representing the extracted contents to their image editor, edit it, open the save as dialog, and drag its icon to their uploading software.

Conceptually, this method could be used with a conventional save dialog box, but this would require the user's programs to have an obvious and easily-accessible location in the filesystem that can be navigated to. In practice, this is often not the case, so GUI pipelines are rare.

5. Implicit invocation

Implicit invocation is a term used by some authors for a style of software architecture in which a system is structured around event handling, using a form of callback. It is closely related to inversion of control and to what is known informally as the Hollywood Principle ("don't call us, we'll call you").
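A minimal sketch of implicit invocation in Python might look like the event bus below; the EventBus class and the event names are invented for illustration. Components register callbacks for named events and are invoked implicitly when another component announces an event, instead of being called directly.

Code:
from collections import defaultdict
from typing import Callable

class EventBus:
    # Registers callbacks and invokes them implicitly when events are announced.
    def __init__(self) -> None:
        self._handlers: dict[str, list[Callable[..., None]]] = defaultdict(list)

    def subscribe(self, event: str, handler: Callable[..., None]) -> None:
        self._handlers[event].append(handler)

    def announce(self, event: str, **data) -> None:
        # The announcer does not know (or care) which components react.
        for handler in self._handlers[event]:
            handler(**data)

bus = EventBus()
bus.subscribe("file_saved", lambda path: print("indexer: re-indexing " + path))
bus.subscribe("file_saved", lambda path: print("backup: queueing " + path))

# "Don't call us, we'll call you": the editor only announces the event.
bus.announce("file_saved", path="report.txt")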

6. Blackboard system

A blackboard system is an artificial intelligence application based on the blackboard architectural model, where a common knowledge base, the "blackboard", is iteratively updated by a diverse group of specialist knowledge sources, starting with a problem specification and ending with a solution. Each knowledge source updates the blackboard with a partial solution when its internal constraints match the blackboard state. In this way, the specialists work together to solve the problem. The blackboard model was originally designed as a way to handle complex, ill-defined problems.
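The control loop of a blackboard system can be sketched as follows; the blackboard contents and the two knowledge sources here are invented for illustration only.

Code:
class KnowledgeSource:
    # A specialist that contributes to the shared blackboard when it can.
    def can_contribute(self, blackboard: dict) -> bool:
        raise NotImplementedError
    def contribute(self, blackboard: dict) -> None:
        raise NotImplementedError

class SplitWords(KnowledgeSource):
    def can_contribute(self, blackboard):
        return "text" in blackboard and "words" not in blackboard
    def contribute(self, blackboard):
        blackboard["words"] = blackboard["text"].split()

class CountWords(KnowledgeSource):
    def can_contribute(self, blackboard):
        return "words" in blackboard and "word_count" not in blackboard
    def contribute(self, blackboard):
        blackboard["word_count"] = len(blackboard["words"])

def solve(blackboard: dict, sources: list) -> dict:
    # Repeatedly let any source whose constraints match update the blackboard,
    # until no source can make further progress.
    progress = True
    while progress:
        progress = False
        for source in sources:
            if source.can_contribute(blackboard):
                source.contribute(blackboard)
                progress = True
    return blackboard

result = solve({"text": "the blackboard model handles ill-defined problems"},
               [CountWords(), SplitWords()])
print(result["word_count"])  # partial solutions combine into a final answer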

7. Peer-to-peer

A peer-to-peer (or P2P) computer network uses diverse connectivity between participants in a network and the cumulative bandwidth of network participants rather than conventional centralized resources where a relatively low number of servers provide the core value to a service or application. P2P networks are typically used for connecting nodes via largely ad hoc connections. Such networks are useful for many purposes. Sharing content files (see file sharing) containing audio, video, data or anything in digital format is very common, and real time data, such as telephony traffic, is also passed using P2P technology.

A pure P2P network does not have the notion of clients or servers but only equal peer nodes that simultaneously function as both "clients" and "servers" to the other nodes on the network. This model of network arrangement differs from the client-server model where communication is usually to and from a central server. A typical example of a file transfer that is not P2P is an FTP server where the client and server programs are quite distinct: the clients initiate the download/uploads, and the servers react to and satisfy these requests.
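The "every node is both client and server" idea can be sketched with sockets in Python; the port numbers and the ping/pong exchange are invented for illustration, and real P2P protocols are far more involved.

Code:
import socket
import threading
import time

def serve(port: int) -> None:
    # Server half of a peer: answer requests from other peers.
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("127.0.0.1", port))
    listener.listen()
    while True:
        conn, _ = listener.accept()
        with conn:
            data = conn.recv(1024)
            conn.sendall(b"pong from " + str(port).encode() + b": " + data)

def ask(peer_port: int, message: bytes) -> bytes:
    # Client half of the same peer: issue a request to another peer.
    with socket.create_connection(("127.0.0.1", peer_port)) as conn:
        conn.sendall(message)
        return conn.recv(1024)

# Two peers on one machine, each listening and each able to query the other.
for port in (9001, 9002):
    threading.Thread(target=serve, args=(port,), daemon=True).start()
time.sleep(0.5)  # give the listeners a moment to start

print(ask(9002, b"ping from 9001"))
print(ask(9001, b"ping from 9002"))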

In contrast to the pure P2P network discussed above, an example of a distributed discussion system that also adopts a client-server model is the Usenet news server system, in which news servers communicate with one another to propagate Usenet news articles over the entire Usenet network. Particularly in the earlier days of Usenet, UUCP was used to extend the network even beyond the Internet. However, the news server system acts in client-server fashion when individual users access a local news server to read and post articles. The same consideration applies to SMTP email, in the sense that the core email-relaying network of mail transfer agents follows a P2P model, while the periphery of e-mail clients and their direct connections is client-server. Tim Berners-Lee's vision for the World Wide Web, as evidenced by his WorldWideWeb editor/browser, was close to a P2P network in that it assumed each user of the web would be an active editor and contributor, creating and linking content to form an interlinked "web". This contrasts with the more broadcast-like structure of the web as it has developed over the years.

Some networks and channels such as Napster, OpenNAP and IRC serving channels use a client-server structure for some tasks (e.g. searching) and a P2P structure for others. Networks such as Gnutella or Freenet use a P2P structure for all purposes, and are sometimes referred to as true P2P networks, although Gnutella is greatly facilitated by directory servers that inform peers of the network addresses of other peers.

P2P architecture embodies one of the key technical concepts of the Internet, described in the first Internet Request for Comments, RFC 1, "Host Software" dated April 7, 1969. More recently, the concept has achieved recognition in the general public in the context of the absence of central indexing servers in architectures used for exchanging multimedia files.

The concept of P2P is increasingly being extended to describe the relational dynamics at work in distributed networks generally, i.e. not just computer-to-computer but human-to-human. Yochai Benkler has coined the term commons-based peer production to denote collaborative projects such as free software. Associated with peer production are the concepts of peer governance (referring to the manner in which peer production projects are managed) and peer property (referring to the new type of licenses which recognize individual authorship but not exclusive property rights, such as the GNU General Public License and the Creative Commons licenses).

8. Service-oriented architecture

In computing, service-oriented architecture (SOA) provides methods for systems development and integration where systems package functionality as interoperable services. A SOA infrastructure allows different applications to exchange data with one another. Service-orientation aims at a loose coupling of services with operating systems, programming languages and other technologies that underlie applications. SOA separates functions into distinct units, or services, which developers make accessible over a network in order that users can combine and reuse them in the production of applications. These services communicate with each other by passing data from one service to another, or by coordinating an activity between two or more services.
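As a very small illustration of services exchanging data over a network, a function can be exposed over HTTP and consumed by a different program without either side knowing how the other is implemented. The /convert path, the conversion logic, and the port below are invented for illustration and are not a real SOA stack.

Code:
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class CurrencyService(BaseHTTPRequestHandler):
    # A self-contained service: callers depend only on its network interface.
    def do_GET(self):
        # Hypothetical operation: convert a fixed amount at a fixed rate.
        body = json.dumps({"amount_eur": 100, "amount_usd": 100 * 1.08}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

server = HTTPServer(("127.0.0.1", 8080), CurrencyService)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Any application, in any language, can consume the service by exchanging data.
with urllib.request.urlopen("http://127.0.0.1:8080/convert") as response:
    print(json.load(response))

server.shutdown()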

9. Naked objects

The naked objects pattern is defined by three principles:

1. All business logic should be encapsulated in the domain objects. This principle is not unique to naked objects: it is simply a strong commitment to encapsulation.

2. The user interface should be a direct representation of the domain objects, with all user actions consisting, explicitly, of creating or retrieving domain objects and/or invoking methods on those objects. This principle is also not unique to naked objects: it is just a specific interpretation of an object-oriented user interface (OOUI).

The original idea in the naked objects pattern arises from the combination of these two, to form the third principle:

3. The user interface should be created 100% automatically from the definition of the domain objects. This may be done using several different technologies, including source code generation; implementations of the naked objects pattern to date have favoured the technology of reflection.
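A toy version of the third principle can be sketched with Python's reflection facilities: the text "user interface" below is derived entirely from the definition of a domain object, with no hand-written presentation layer. The Customer class and its methods are invented for illustration.

Code:
import inspect

class Customer:
    # A domain object carrying both state and behaviour (principle 1).
    def __init__(self, name: str) -> None:
        self.name = name
        self.balance = 0.0

    def deposit(self, amount: float) -> None:
        self.balance += amount

    def rename(self, new_name: str) -> None:
        self.name = new_name

def generate_ui(obj) -> None:
    # Derive a crude text UI from the object itself via reflection (principle 3).
    print("--- " + type(obj).__name__ + " ---")
    for field, value in vars(obj).items():
        print("  %s = %s" % (field, value))
    print("actions:")
    for name, method in inspect.getmembers(obj, inspect.ismethod):
        if name.startswith("_"):
            continue
        params = list(inspect.signature(method).parameters)
        print("  %s(%s)" % (name, ", ".join(params)))

customer = Customer("Ada")
customer.deposit(250.0)
generate_ui(customer)   # the 'view' is produced 100% automatically from the object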

The naked objects pattern was first described formally in Richard Pawson's PhD thesis,[1] which includes a thorough investigation of various antecedents of and inspirations for the pattern, including, for example, the Morphic user interface.

Naked Objects is commonly contrasted with the model-view-controller pattern. However, the published version of Pawson's thesis (see References) contains a foreword by Trygve Reenskaug, the inventor of the model-view-controller pattern, suggesting that naked objects is closer to the original intent of model-view-controller than many of the subsequent interpretations and implementations.

Benefits

Pawson's thesis claims four benefits for the pattern:

A faster development cycle, because there are fewer layers to develop. In a more conventional design, the developer must define and implement three or more separate layers: the domain object layer, the presentation layer, and the task or process scripts that connect the two. (If the naked objects pattern is combined with object-relational mapping or an object database, then it is possible to create all layers of the system from the domain object definitions alone; however, this does not form part of the naked objects pattern per se.) The thesis includes a case study comparing two different implementations of the same application: one based on a conventional '4-layer' implementation; the other using naked objects.

Greater agility, referring to the ease with which an application may be altered to accommodate future changes in business requirements. In part this arises from the reduction in the number of developed layers that must be kept in synchronization. However, the claim is also made that the enforced 1:1 correspondence between the user presentation and the domain model forces higher-quality object modeling, which in turn improves agility.

A more empowering style of user interface. This benefit is really attributable to the resulting object-oriented user interface (OOUI), rather than to naked objects per se, although the argument is made that naked objects makes it much easier to conceive and to implement an OOUI.

Easier requirements analysis. The argument here is that with the naked objects pattern the domain objects form a common language between users and developers, and that this common language facilitates the discussion of requirements, because there are no other representations to discuss. Combined with the faster development cycle, it becomes possible to prototype functional applications in real time.

10. Model-View-Controller

Model–view–controller (MVC) is an architectural pattern used in software engineering. Successful use of the pattern isolates business logic from user interface considerations, resulting in an application where it is easier to modify either the visual appearance of the application or the underlying business rules without affecting the other. In MVC, the model represents the information (the data) of the application; the view corresponds to elements of the user interface such as text, checkbox items, and so forth; and the controller manages the communication of data and the business rules used to manipulate the data to and from the model.
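A minimal sketch of the three roles in Python follows; the to-do list domain is invented for illustration.

Code:
class TaskModel:
    # Model: holds the application data and the business rules.
    def __init__(self) -> None:
        self.tasks: list = []

    def add(self, task: str) -> None:
        if not task.strip():
            raise ValueError("a task needs a description")  # business rule
        self.tasks.append(task.strip())

class TaskView:
    # View: knows only how to present data to the user.
    def render(self, tasks: list) -> None:
        for i, task in enumerate(tasks, start=1):
            print("%d. %s" % (i, task))

class TaskController:
    # Controller: mediates between user input, the model, and the view.
    def __init__(self, model: TaskModel, view: TaskView) -> None:
        self.model = model
        self.view = view

    def handle_add(self, user_input: str) -> None:
        self.model.add(user_input)          # update the data via the model
        self.view.render(self.model.tasks)  # then refresh the presentation

controller = TaskController(TaskModel(), TaskView())
controller.handle_add("write the architecture report")
controller.handle_add("review the MVC example")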