
Tuesday, April 28, 2009

Architectural Patterns

Architectural patterns are software patterns that offer well-established solutions to architectural problems in software engineering. A pattern describes the element and relation types together with a set of constraints on how they may be used. An architectural pattern expresses a fundamental structural organization schema for a software system: the subsystems it consists of, their responsibilities, and their interrelations. Compared to design patterns, architectural patterns are larger in scale.

Even though an architectural pattern conveys an image of a system, it is not an architecture as such. An architectural pattern is rather a concept that captures essential elements of a software architecture. Countless different architectures may implement the same pattern and thereby share the same characteristics. Furthermore, patterns are often defined as something "strictly described and commonly available". For example, layered architecture is a call-and-return style in that it defines an overall style of interaction; when it is strictly described and commonly available, it is a pattern.

One of the most important aspects of architectural patterns is that they embody different quality attributes. For example, some patterns represent solutions to performance problems and others can be used successfully in high-availability systems. In the early design phase, a software architect makes a choice of which architectural pattern(s) best provide the system's desired qualities.

Examples of architectural patterns include the following:

§ Layers

§ Presentation-abstraction-control

§ Three-tier

§ Pipeline

§ Implicit invocation

§ Blackboard system

§ Peer-to-peer

§ Service-oriented architecture

§ Naked objects

§ Model-View-Controller

1. Layers

§ In object-oriented design, a layer is a group of classes that have the same set of link-time module dependencies to other modules. In other words, a layer is a group of components that are reusable in similar circumstances.

§ Layers are often arranged in a tree-form hierarchy, with dependency relationships as links between the layers. Dependency relationships between layers are often either inheritance, composition or aggregation relationships, but other kinds of dependencies can also be used.
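A minimal Python sketch of the layering idea, assuming a simple three-layer application; all class and method names here are illustrative, not from any real framework. Each layer depends only on the layer directly beneath it:

```python
class DataLayer:
    """Bottom layer: raw storage access (an in-memory dict for illustration)."""
    def __init__(self):
        self._store = {}

    def save(self, key, value):
        self._store[key] = value

    def load(self, key):
        return self._store.get(key)


class LogicLayer:
    """Middle layer: business rules, composed over the data layer."""
    def __init__(self, data):
        self._data = data

    def register(self, name):
        if not name:
            raise ValueError("name must be non-empty")
        self._data.save(name, {"name": name})
        return name


class PresentationLayer:
    """Top layer: talks only to the logic layer, never to storage directly."""
    def __init__(self, logic):
        self._logic = logic

    def handle_signup(self, name):
        return f"Welcome, {self._logic.register(name)}!"


app = PresentationLayer(LogicLayer(DataLayer()))
print(app.handle_signup("alice"))  # Welcome, alice!
```

The dependency relationships here are composition: each layer holds a reference to the layer below, and no layer reaches past its immediate neighbor.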

2. Presentation-abstraction-control

Presentation-abstraction-control (PAC) is a software architectural pattern, somewhat similar to model-view-controller (MVC). PAC is used as a hierarchical structure of agents, each consisting of a triad of presentation, abstraction and control parts. The agents (or triads) communicate with each other only through the control part of each triad. PAC also differs from MVC in that within each triad it completely insulates the presentation (view in MVC) from the abstraction (model in MVC). This provides the option to multithread the model and view separately, which can give the user the experience of very short program start times, since the user interface (presentation) can be shown before the abstraction has fully initialized.
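As a rough sketch of the triad structure (hypothetical names, heavily simplified), each agent bundles a presentation, an abstraction, and a control part, and all inter-agent traffic goes through the control parts:

```python
class Presentation:
    """The visible part of a triad."""
    def show(self, data):
        return f"[view] {data}"


class Abstraction:
    """The data part of a triad."""
    def __init__(self):
        self.data = "initial"


class Control:
    """Mediates inside a triad and carries all traffic between triads."""
    def __init__(self, parent=None):
        self.presentation = Presentation()
        self.abstraction = Abstraction()
        self.parent = parent

    def notify(self, message):
        # Update our abstraction, refresh our presentation, then inform
        # the parent agent -- always via its control part, never directly.
        self.abstraction.data = message
        rendered = self.presentation.show(self.abstraction.data)
        if self.parent is not None:
            self.parent.notify(f"child says: {message}")
        return rendered


root = Control()
child = Control(parent=root)
print(child.notify("update"))  # [view] update
```

Note that the child's presentation and abstraction never touch each other or the parent; the control part is the only crossing point, which is the defining constraint of PAC.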

3. Three-tier

Three-tier is a client-server architecture in which the user interface, functional process logic ("business rules"), computer data storage and data access are developed and maintained as independent modules, most often on separate platforms.

The three-tier model is considered to be a software architecture and a software design pattern.

Apart from the usual advantages of modular software with well defined interfaces, the three-tier architecture is intended to allow any of the three tiers to be upgraded or replaced independently as requirements or technology change. For example, a change of operating system in the presentation tier would only affect the user interface code.

Typically, the user interface runs on a desktop PC or workstation and uses a standard graphical user interface, functional process logic may consist of one or more separate modules running on a workstation or application server, and an RDBMS on a database server or mainframe contains the computer data storage logic. The middle tier may be multi-tiered itself (in which case the overall architecture is called an "n-tier architecture").

The 3-Tier architecture has the following three tiers:

Presentation Tier

This is the topmost level of the application. The presentation tier displays information related to such services as browsing merchandise, purchasing, and shopping cart contents. It communicates with the other tiers by sending results to the browser/client and to the other tiers in the network.

Application Tier (Business Logic/Logic Tier)

The logic tier is pulled out from the presentation tier and, as its own layer, it controls an application’s functionality by performing detailed processing.

Data Tier

This tier consists of Database Servers. Here information is stored and retrieved. This tier keeps data neutral and independent from application servers or business logic. Giving data its own tier also improves scalability and performance.
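The separation above can be sketched in Python as follows (illustrative names; a real deployment would put each tier on its own platform). The key constraint is that the presentation tier reaches the data only through the application tier:

```python
class DataTier:
    """Stand-in for a database server: stores and retrieves product data."""
    def __init__(self):
        self._products = {"book": 12.50, "pen": 1.20}

    def fetch_price(self, name):
        return self._products[name]


class ApplicationTier:
    """Business logic: pricing rules live here, not in the UI or the database."""
    TAX = 0.10  # hypothetical tax rate for illustration

    def __init__(self, data):
        self._data = data

    def price_with_tax(self, name):
        return round(self._data.fetch_price(name) * (1 + self.TAX), 2)


class PresentationTier:
    """User interface: formats results; never queries the data tier directly."""
    def __init__(self, app):
        self._app = app

    def show_price(self, name):
        return f"{name}: ${self._app.price_with_tax(name):.2f}"


ui = PresentationTier(ApplicationTier(DataTier()))
print(ui.show_price("book"))  # book: $13.75
```

Because each tier talks only to its neighbor, swapping the in-memory `DataTier` for a real RDBMS client would leave the other two tiers untouched.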

Comparison with the MVC architecture

At first glance, the three tiers may seem similar to the MVC (Model View Controller) concept; however, topologically they are different. A fundamental rule in a three-tier architecture is the client tier never communicates directly with the data tier; in a three-tier model all communication must pass through the middleware tier. Conceptually the three-tier architecture is linear. However, the MVC architecture is triangular: the View sends updates to the Controller, the Controller updates the Model, and the View gets updated directly from the Model.

From a historical perspective the three-tier architecture concept emerged in the 1990s from observations of distributed systems (e.g., web applications) where the client, middleware and data tiers ran on physically separate platforms.

Web Development usage

In the Web development field, three-tier is often used to refer to Websites, commonly Electronic commerce websites, which are built using three tiers:

1. A front end Web server serving static content.

2. A middle dynamic content processing and generation level Application server, for example Java EE, ASP.net, PHP platform.

3. A back end Database, comprising both data sets and the Database management system or RDBMS software that manages and provides access to the data.

4. Pipeline

Pipelines are often implemented in a multitasking OS, by launching all elements at the same time as processes, and automatically servicing the data read requests by each process with the data written by the upstream process. In this way, the CPU will be naturally switched among the processes by the scheduler so as to minimize its idle time. In other common models, elements are implemented as lightweight threads or as coroutines to reduce the OS overhead often involved with processes. Depending upon the OS, threads may be scheduled directly by the OS or by a thread manager. Coroutines are always scheduled by a coroutine manager of some form.
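The coroutine flavor mentioned above can be sketched with Python generators, where each stage lazily pulls data from the previous one; the stage names are made up for illustration:

```python
def numbers(limit):
    """Source stage: produce a stream of integers."""
    for i in range(limit):
        yield i

def square(source):
    """Filter stage: transform each item as it flows through."""
    for x in source:
        yield x * x

def total(source):
    """Sink stage: consume the stream and reduce it to one value."""
    return sum(source)

# Wire the stages together; items flow one at a time, with no OS processes.
print(total(square(numbers(5))))  # 30
```

Because generators yield one item at a time, the whole stream is never materialized, which mirrors how a pipe bounds memory use between processes.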

Usually, read and write requests are blocking operations, which means that the execution of the source process, upon writing, is suspended until all data could be written to the destination process, and, likewise, the execution of the destination process, upon reading, is suspended until at least some of the requested data could be obtained from the source process. Obviously, this cannot lead to a deadlock, where both processes would wait indefinitely for each other to respond, since at least one of the two processes will soon thereafter have its request serviced by the operating system, and continue to run.

For performance, most operating systems implementing pipes use pipe buffers, which allow the source process to provide more data than the destination process is currently able or willing to receive. Under most Unices and Unix-like operating systems, a special command is also available which implements a pipe buffer of potentially much larger and configurable size, typically called "buffer". This command can be useful if the destination process is significantly slower than the source process, but it is anyway desired that the source process can complete its task as soon as possible. E.g., if the source process consists of a command which reads an audio track from a CD and the destination process consists of a command which compresses the waveform audio data to a format like OGG Vorbis. In this case, buffering the entire track in a pipe buffer would allow the CD drive to spin down more quickly, and enable the user to remove the CD from the drive before the encoding process has finished.

Such a buffer command can be implemented using available operating system primitives for reading and writing data. Wasteful busy waiting can be avoided by using facilities such as poll, select, or multithreading.

VM/CMS and MVS

CMS Pipelines is a port of the pipeline idea to VM/CMS and MVS systems. It supports much more complex pipeline structures than Unix shells, with steps taking multiple input streams and producing multiple output streams. (Such functionality is supported by the Unix kernel, but few programs use it as it makes for complicated syntax and blocking modes, although some shells do support it via arbitrary file descriptor assignment.) Due to the different nature of IBM mainframe operating systems, it implements many steps inside CMS Pipelines which in Unix are separate external programs, but can also call separate external programs for their functionality. Also, due to the record-oriented nature of files on IBM mainframes, pipelines operate in a record-oriented, rather than stream-oriented manner.

Pseudo-pipelines

On single-tasking operating systems, the processes of a pipeline have to be executed one by one in sequential order; thus the output of each process must be saved to a temporary file, which is then read by the next process. Since there is no parallelism or CPU switching, this version is called a "pseudo-pipeline".

For example, the command line interpreter of MS-DOS ('COMMAND.COM') provides pseudo-pipelines with a syntax superficially similar to that of Unix pipelines. The command "dir | sort | more" would have been executed like this (albeit with more complicated temporary file names):

1. Create temporary file 1.tmp

2. Run command "dir", redirecting its output to 1.tmp

3. Create temporary file 2.tmp

4. Run command "sort", redirecting its input to 1.tmp and its output to 2.tmp

5. Run command "more", redirecting its input to 2.tmp, and presenting its output to the user

6. Delete 1.tmp and 2.tmp, which are no longer needed

7. Return to the command prompt

All temporary files are stored in the directory pointed to by %TEMP%, or the current directory if %TEMP% isn't set.
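The numbered steps above can be simulated in Python, with each stage running to completion and handing a temporary file to the next; `stage_dir` and `stage_sort` are stand-ins for the real DOS commands, not implementations of them:

```python
import os
import tempfile

def stage_dir(out_path):
    """Stand-in for "dir": write some unsorted lines to a temp file."""
    with open(out_path, "w") as f:
        f.write("beta\nalpha\ngamma\n")

def stage_sort(in_path, out_path):
    """Stand-in for "sort": read the previous stage's file fully, then write."""
    with open(in_path) as f:
        lines = sorted(f.readlines())
    with open(out_path, "w") as f:
        f.writelines(lines)

fd1, tmp1 = tempfile.mkstemp(suffix=".tmp"); os.close(fd1)  # like step 1
fd2, tmp2 = tempfile.mkstemp(suffix=".tmp"); os.close(fd2)  # like step 3
stage_dir(tmp1)                 # like step 2: first command writes 1.tmp
stage_sort(tmp1, tmp2)          # like step 4: next command reads 1.tmp, writes 2.tmp
with open(tmp2) as f:           # like step 5: present the final output
    print(f.read(), end="")
os.remove(tmp1); os.remove(tmp2)  # like step 6: delete the temp files
```

The defining restriction shows up clearly: `stage_sort` cannot start until `stage_dir` has finished writing its entire output file.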

Thus, pseudo-pipes acted like true pipes with a pipe buffer of unlimited size (disk space limitations notwithstanding), with the significant restriction that a receiving process could not read any data from the pipe buffer until the sending process had finished completely. Besides causing disk traffic that would have been unnecessary under multi-tasking operating systems (at least without a hard disk cache such as SMARTDRV installed), this implementation also made pipes unsuitable for applications requiring real-time response, such as interactive use (where the user enters commands that the first process in the pipeline receives via stdin, and the last process in the pipeline presents its output to the user via stdout).

Also, commands that produce a potentially infinite amount of output, such as the yes command, cannot be used in a pseudo-pipeline, since they would run until the temporary disk space is exhausted, so the following processes in the pipeline could not even start to run.

Object pipelines

Besides byte stream-based pipelines, there are also object pipelines. In an object pipeline, the processes output objects instead of text, thereby removing the string-parsing tasks that are common in UNIX shell scripts. Windows PowerShell uses this scheme and transfers .NET objects. Channels, found in the Limbo programming language, are another example of this metaphor.
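A rough Python analogue of an object pipeline: the stages below pass structured objects rather than text, so no parsing is needed. The `where` stage is loosely modeled on the idea behind PowerShell's `Where-Object`, but this is an illustration, not PowerShell's actual mechanism:

```python
from dataclasses import dataclass

@dataclass
class Process:
    name: str
    memory_mb: int

def processes():
    """Source stage: emit structured objects, not lines of text."""
    yield Process("editor", 120)
    yield Process("browser", 900)
    yield Process("shell", 15)

def where(source, predicate):
    """Filter stage: keep only objects matching the predicate."""
    for obj in source:
        if predicate(obj):
            yield obj

# No string splitting anywhere: fields are accessed as attributes.
heavy = where(processes(), lambda p: p.memory_mb > 100)
print([p.name for p in heavy])  # ['editor', 'browser']
```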

Pipelines in GUIs

Graphical environments such as RISC OS and ROX Desktop also make use of pipelines. Rather than providing a save dialog box containing a file manager to let the user specify where a program should write data, RISC OS and ROX provide a save dialog box containing an icon (and a field to specify the name). The destination is specified by dragging and dropping the icon. The user can drop the icon anywhere an already-saved file could be dropped, including onto icons of other programs. If the icon is dropped onto a program's icon, it's loaded and the contents that would otherwise have been saved are passed in on the new program's standard input stream.

For instance, a user browsing the world-wide web might come across a .gz compressed image which they want to edit and re-upload. Using GUI pipelines, they could drag the link to their de-archiving program, drag the icon representing the extracted contents to their image editor, edit it, open the save as dialog, and drag its icon to their uploading software.

Conceptually, this method could be used with a conventional save dialog box, but this would require the user's programs to have an obvious and easily-accessible location in the filesystem that can be navigated to. In practice, this is often not the case, so GUI pipelines are rare.

5. Implicit invocation

Implicit invocation is a term used by some authors for a style of software architecture in which a system is structured around event handling, using a form of callback. It is closely related to Inversion of control and what is known informally as the Hollywood Principle.
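A minimal sketch of implicit invocation (all names hypothetical): components register callbacks for named events, and the dispatcher, not the announcing component, decides what gets invoked. The announcer never calls its consumers directly:

```python
class EventBus:
    """A tiny event dispatcher: the heart of implicit invocation."""
    def __init__(self):
        self._handlers = {}

    def subscribe(self, event, handler):
        self._handlers.setdefault(event, []).append(handler)

    def announce(self, event, payload):
        # The announcer does not know (or care) who is listening.
        for handler in self._handlers.get(event, []):
            handler(payload)


bus = EventBus()
log = []
bus.subscribe("file_saved", lambda path: log.append(f"indexed {path}"))
bus.subscribe("file_saved", lambda path: log.append(f"backed up {path}"))
bus.announce("file_saved", "notes.txt")
print(log)  # ['indexed notes.txt', 'backed up notes.txt']
```

This is the "Hollywood Principle" in miniature: the indexer and backup components do not call anyone; they are called when the event fires.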

6. Blackboard system

A blackboard system is an artificial intelligence application based on the blackboard architectural model, where a common knowledge base, the "blackboard", is iteratively updated by a diverse group of specialist knowledge sources, starting with a problem specification and ending with a solution. Each knowledge source updates the blackboard with a partial solution when its internal constraints match the blackboard state. In this way, the specialists work together to solve the problem. The blackboard model was originally designed as a way to handle complex, ill-defined problems.
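A toy blackboard sketch (the "problem" here is deliberately trivial): each knowledge source acts only when its precondition matches the blackboard state, and a simple control loop keeps invoking sources until none can contribute further:

```python
# Shared blackboard: starts with a problem specification, ends with a solution.
blackboard = {"problem": "HELLO WORLD", "words": None, "lowercased": None}

def splitter(bb):
    """Knowledge source: acts only if the text has not been split yet."""
    if bb["words"] is None:
        bb["words"] = bb["problem"].split()
        return True
    return False

def lowercaser(bb):
    """Knowledge source: acts only once words exist but are not lowercased."""
    if bb["words"] is not None and bb["lowercased"] is None:
        bb["lowercased"] = [w.lower() for w in bb["words"]]
        return True
    return False

# Registration order does not matter; preconditions sequence the work.
knowledge_sources = [lowercaser, splitter]

progress = True
while progress:  # control loop: stop when no source can act
    progress = any(ks(blackboard) for ks in knowledge_sources)

print(blackboard["lowercased"])  # ['hello', 'world']
```

Note that the sources never call each other; they cooperate only through the shared blackboard state, which is the defining trait of the pattern.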

7. Peer-to-peer

A peer-to-peer (or P2P) computer network uses diverse connectivity between participants in a network and the cumulative bandwidth of network participants rather than conventional centralized resources where a relatively low number of servers provide the core value to a service or application. P2P networks are typically used for connecting nodes via largely ad hoc connections. Such networks are useful for many purposes. Sharing content files (see file sharing) containing audio, video, data or anything in digital format is very common, and real time data, such as telephony traffic, is also passed using P2P technology.

A pure P2P network does not have the notion of clients or servers but only equal peer nodes that simultaneously function as both "clients" and "servers" to the other nodes on the network. This model of network arrangement differs from the client-server model, where communication is usually to and from a central server. A typical example of a file transfer that is not P2P is an FTP server, where the client and server programs are quite distinct: the clients initiate the downloads/uploads, and the servers react to and satisfy these requests.

In contrast to the above discussed pure P2P network, an example of a distributed discussion system that also adopts a client-server model is the Usenet news server system, in which news servers communicate with one another to propagate Usenet news articles over the entire Usenet network. Particularly in the earlier days of Usenet, UUCP was used to extend even beyond the Internet. However, the news server system acted in a client-server form when individual users accessed a local news server to read and post articles. The same consideration applies to SMTP email in the sense that the core email relaying network of Mail transfer agents follows a P2P model while the periphery of e-mail clients and their direct connections is client-server. Tim Berners-Lee's vision for the World Wide Web, as evidenced by his WorldWideWeb editor/browser, was close to a P2P network in that it assumed each user of the web would be an active editor and contributor creating and linking content to form an interlinked "web" of links. This contrasts to the more broadcasting-like structure of the web as it has developed over the years.

Some networks and channels such as Napster, OpenNAP and IRC serving channels use a client-server structure for some tasks (e.g. searching) and a P2P structure for others. Networks such as Gnutella or Freenet use a P2P structure for all purposes, and are sometimes referred to as true P2P networks, although Gnutella is greatly facilitated by directory servers that inform peers of the network addresses of other peers.

P2P architecture embodies one of the key technical concepts of the Internet, described in the first Internet Request for Comments, RFC 1, "Host Software" dated April 7, 1969. More recently, the concept has achieved recognition in the general public in the context of the absence of central indexing servers in architectures used for exchanging multimedia files.

The concept of P2P is increasingly evolving to an expanded usage as the relational dynamic active in distributed networks, i.e. not just computer to computer, but human to human. Yochai Benkler has coined the term commons-based peer production to denote collaborative projects such as free software. Associated with peer production are the concepts of peer governance (referring to the manner in which peer production projects are managed) and peer property (referring to the new type of licenses which recognize individual authorship but not exclusive property rights, such as the GNU General Public License and the Creative Commons licenses).

8. Service-oriented architecture

In computing, service-oriented architecture (SOA) provides methods for systems development and integration where systems package functionality as interoperable services. A SOA infrastructure allows different applications to exchange data with one another. Service-orientation aims at a loose coupling of services with operating systems, programming languages and other technologies that underlie applications. SOA separates functions into distinct units, or services, which developers make accessible over a network in order that users can combine and reuse them in the production of applications. These services communicate with each other by passing data from one service to another, or by coordinating an activity between two or more services.

9. Naked objects

The naked objects pattern is defined by three principles:

1. All business logic should be encapsulated in the domain objects. This principle is not unique to naked objects: it is just a strong commitment to encapsulation.

2. The user interface should be a direct representation of the domain objects, with all user actions consisting, explicitly, of creating or retrieving domain objects and/or invoking methods on those objects. This principle is also not unique to naked objects: it is just a specific interpretation of an object-oriented user interface (OOUI).

The original idea in the naked objects pattern arises from the combination of these two, to form the third principle:

3. The user interface should be created 100% automatically from the definition of the domain objects. This may be done using several different technologies, including source code generation; implementations of the naked objects pattern to date have favoured the technology of reflection.
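The reflection idea behind the third principle can be sketched in Python with the standard `inspect` module; the `Customer` class and `available_actions` helper are hypothetical, and a real naked objects framework would render full forms and menus rather than a list of names:

```python
import inspect

class Customer:
    """A plain domain object with its business behavior on it."""
    def __init__(self, name):
        self.name = name
        self.orders = []

    def place_order(self, item):
        self.orders.append(item)

def available_actions(obj):
    """Discover user actions straight from the object's own definition."""
    return [name for name, member in inspect.getmembers(obj, inspect.ismethod)
            if not name.startswith("_")]

c = Customer("Ada")
print(available_actions(c))  # ['place_order']
```

The point is that nothing about the "UI" is hand-coded per class: add a method to `Customer` and it appears as a user action automatically.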

The naked objects pattern was first described formally in Richard Pawson's PhD thesis[1] which includes a thorough investigation of various antecedents and inspirations for the pattern including, for example, the Morphic user interface.

Naked Objects is commonly contrasted with the model-view-controller pattern. However, the published version of Pawson's thesis (see References) contains a foreword by Trygve Reenskaug, the inventor of the model-view-controller pattern, suggesting that naked objects is closer to the original intent of model-view-controller than many of the subsequent interpretations and implementations.

Benefits

Pawson's thesis claims four benefits for the pattern:

A faster development cycle, because there are fewer layers to develop. In a more conventional design, the developer must define and implement three or more separate layers: the domain object layer, the presentation layer, and the task or process scripts that connect the two. (If the naked objects pattern is combined with object-relational mapping or an object database, then it is possible to create all layers of the system from the domain object definitions alone; however, this does not form part of the naked objects pattern per se.) The thesis includes a case study comparing two different implementations of the same application: one based on a conventional '4-layer' implementation; the other using naked objects.

Greater agility, referring to the ease with which an application may be altered to accommodate future changes in business requirements. In part this arises from the reduction in the number of developed layers that must be kept in synchronization. However the claim is also made that the enforced 1:1 correspondence between the user presentation and the domain model, forces higher-quality object modeling, which in turn improves the agility.

A more empowering style of user interface. This benefit is really attributable to the resulting object-oriented user interface (OOUI), rather than to naked objects per se, although the argument is made that naked objects makes it much easier to conceive and to implement an OOUI.

Easier requirements analysis. The argument here is that with the naked objects pattern, the domain objects form a common language between users and developers and that this common language facilitates the process of discussing requirements - because there are no other representations to discuss. Combined with the faster development cycle, it becomes possible to prototype functional applications in real time.

10. Model-View-Controller

Model–view–controller (MVC) is an architectural pattern used in software engineering. Successful use of the pattern isolates business logic from user interface considerations, resulting in an application where it is easier to modify either the visual appearance of the application or the underlying business rules without affecting the other. In MVC, the model represents the information (the data) of the application; the view corresponds to elements of the user interface such as text, checkbox items, and so forth; and the controller manages the communication of data and the business rules used to manipulate the data to and from the model.
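A minimal sketch of the MVC triangle (illustrative names, no GUI toolkit): the controller translates user input into model updates, and the view refreshes directly from the model, which it observes:

```python
class Model:
    """Holds the application data and notifies observers of changes."""
    def __init__(self):
        self.value = 0
        self._observers = []

    def attach(self, observer):
        self._observers.append(observer)

    def set_value(self, value):
        self.value = value
        for obs in self._observers:  # the view is updated directly by the model
            obs.refresh(self)


class View:
    """Renders the model; never contains business rules."""
    def __init__(self):
        self.rendered = ""

    def refresh(self, model):
        self.rendered = f"value = {model.value}"


class Controller:
    """Translates user input into operations on the model."""
    def __init__(self, model):
        self._model = model

    def on_user_input(self, text):
        self._model.set_value(int(text))


model, view = Model(), View()
model.attach(view)
Controller(model).on_user_input("42")
print(view.rendered)  # value = 42
```

Changing how `View.refresh` formats the output, or how `set_value` validates data, can each be done without touching the other class, which is the isolation the paragraph above describes.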

Monday, April 27, 2009

Using Group Policy in Windows (type gpedit.msc from run)


The Windows Operating Systems provide a centralized management and configuration solution called Group Policy. Group Policy is supported on Windows 2000, Windows XP Professional, Windows Vista, Windows Server 2003 and Windows Server 2008. Windows XP Media Center Edition and Windows XP Professional computers not joined to a domain can also use the Group Policy Object Editor to change the group policy for the individual computer. This local group policy however is much more limited than GPOs for Active Directory. Windows Home does not support Group Policy since it has no functionality to connect to a domain.

Usually Group Policy is used in an enterprise environment, but it can be used in schools, small businesses, and other organizations as well. Group Policy can control a system's registry, NTFS security, audit and security policy, software installation, logon/logoff scripts, folder redirection, and Internet Explorer settings. For example, you can use it to restrict certain actions that pose a security risk, like blocking the Task Manager, restricting access to certain folders, disabling downloaded executable files, etc.

Group Policy has both Active Directory and Local Computer Policy capabilities. Local Group Policy (LGP) using GPEDIT is a more basic version of the group policy used by Active Directory. In versions of Windows before Vista, LGP can configure the group policy for a single local computer, but unlike Active Directory group policy, cannot make policies for individual users or groups. Windows Vista supports Multiple Local Group Policy Objects, which allows setting local group policy for individual users. Windows Vista provides this ability with three layers of Local Group Policy objects: Local Group Policy, Administrators and Non-Administrators Group Policy, and user-specific Local Group Policy. These layers are processed in order, starting with Local Group Policy, continuing with Administrators and Non-Administrators Group Policy, and finishing with user-specific Local Group Policy.

Primarily you see Group Policy used in Active Directory solutions. Policy settings are stored in Group Policy Objects (GPOs), each internally referenced by a Globally Unique Identifier (GUID), and a GPO may be linked to multiple domains or organizational units. In this way, potentially thousands of machines or users can be updated via a simple change to a single GPO, which reduces the administrative burden and costs associated with managing these resources.

Group Policies are analyzed and applied at startup for computers and during logon for users. The client machine refreshes most of the Group Policy settings periodically, with the period ranging from 60 to 120 minutes, controlled by a configurable parameter of the Group Policy settings.

  • Configuring Group Policy Settings

    Group Policy Object Editor (GPEDIT) is the main application that is used to administer Group Policies. GPEDIT consists of two main sections: User Configuration and Computer Configuration. The User Configuration holds settings that are applied to users (at logon and periodic background refresh) while the Computer Configuration holds settings that are applied to computers (at startup and periodic background refresh). These sections are further divided into the different types of policies that can be set, such as Administrative Templates, Security, or Folder Redirection.

    Group Policy settings are configured by navigating to the appropriate location in each section. For example, you can set an Administrative Templates policy setting in a GPO to prevent users from seeing the Run command. To do this you would enable the policy setting Remove Run Menu from Start Menu. This setting is located under User Configuration, Administrative Templates, Start Menu, and Task Bar. You edit most policy settings by double-clicking the title of the policy setting, which opens a dialog box that provides specific options. In Administrative Templates policy settings, for example, you can choose to enable or disable the policy setting or leave it as not configured. In other areas, such as Security Settings, you can select a check box to define a policy setting and then set available parameters.

    The Group Policy Object Editor (GPEDIT) provides different ways of learning about the function or definition of specific policy settings. In most cases, when you double-click the title of a policy setting, the dialog box contains any relevant defining information about the policy setting. For Administrative Templates policy settings, the Group Policy Object Editor provides explanation text directly in the Web view of the console. You can also find this explanation text by double-clicking the policy setting and then clicking the Explain text tab. In either case, this text shows operating system requirements, defines the policy setting, and includes any specific details about the effect of enabling or disabling the policy setting.

     
  • Using Local Policy to Turn Off Windows Features

    Windows has a lot of features, but you may not want all of them enabled for all users. For example, the "Auto play" feature on CD-ROM drives might be a setting you'd like to have turned off. Starting the policy editor is quite simple.

1. Click start and then run.

2. Type gpedit.msc and press enter.

3. The policy editor will start.

It should say in the top left corner "local computer policy". Make sure you take plenty of time to familiarize yourself with GPEDIT before you attempt any changes and be careful when you are setting options. You should read the help and understand each setting before you change it. Take the time to browse through all the main sections: "Computer Configuration" and "User Configuration". In both sections you will find the same subsections, some of which you do not need to touch. The one you will be most interested in for both User and Computer configuration is the section marked "Administrative Templates".

There are usually three settings for each policy:

1. Not configured - This is the default setting, which means the policy is not overriding any configuration changes that have been made on the machine by the user. If you do not want to specify a certain setting, then the setting should be left with this option enabled.

2. Enabled - This means that the particular setting or option is set. For example "Enabled" against "Auto Play is disabled" means that Auto Play is disabled.

3. Disabled - This is the opposite of enabled and usually means you have turned off access to a feature that would normally be accessible.

There will be exceptions to some settings, where you are asked to actually enter text or choose from a list. Sometimes after you enable a setting there will be additional options you need to select.

For Windows 2000, you can see the policy explanation of what each change will do by right clicking the setting and choosing properties. The "explain" tab will give you the information. For Windows XP, select the "Extended" tab at the bottom of the Policy Editor window. It is also available from properties as per Windows 2000.

  • Policy Changes In Action

    Many of the changes you make will take effect immediately after your computer applies the setting and the desktop refreshes. Other changes might not take complete effect until your system has been completely restarted. You may want to always reboot your system after making changes. No matter what, make sure each change is what you want to happen; otherwise you could accidentally lock yourself out of something.

Policy Highlights -

Here are a couple of changes to the policy that you might want to consider making.

A) Set Internet Explorer Homepage. Stop your home page from being changed; it is reset each time you log in. This will affect all users of your machine.
---- User Configuration: Windows Settings: Internet Explorer Maintenance: URLs: Home Page

B) Disable Auto Play. Turn off auto play of new CD-ROMs and music CDs:
---- User Configuration: Administrative Templates: System: Disable Auto Play
---- Computer Configuration: Administrative Templates: System: Disable Auto Play

C) Turn Off Personalised Menus. Does the start menu annoy you by not showing everything? Turn off personalised menus for all users by enabling this setting.
---- User Configuration: Administrative Templates: Windows Components: Start Menu and Task Bar: Disable Personalised Menus