Casetools complete notes, Study notes of Computer Applications
Typology: Study notes
Uploaded on 08/11/2015 by mohana_priya


UNIT-I

Data Modeling: Business Growth - Organizational Model - Case Study of a Student MIS - What is the Purpose of Such Models - Understanding the Business - Types of Models - Model Development Approach - The Case for Structured Development - Advantages of Using a CASE Tool. System Analysis and Design - What is a DFD - General Rules for Drawing DFDs - Difference Between Logical and Physical Data Flow Diagrams - Software versus Information Engineering - How CASE Tools Store Information.

DATA MODELING:

All businesses run along fairly pre-determined lines. Authority is handed to people, areas of responsibility and accountability are marked out, and business transactions are processed to bring in profits.

When a new business comes into existence, the top management takes over the responsibility of designing the detailed daily and hourly schedules. These are implemented and checked for problems.

The three levels of data modeling - the conceptual data model, the logical data model, and the physical data model - were discussed in prior sections. Here we compare these three types of data models. The table below compares their features:

Feature                Conceptual   Logical   Physical

Entity Names               ✓           ✓
Entity Relationships       ✓           ✓
Attributes                             ✓
Primary Keys                           ✓           ✓
Foreign Keys                           ✓           ✓
Table Names                                        ✓
Column Names                                       ✓
Column Data Types                                  ✓

(Diagrams: Conceptual Model Design, Logical Model Design, Physical Model Design)

Below we show the conceptual, logical, and physical versions of a single data model. We can see that the complexity increases from conceptual to logical to physical. This is why we always start with the conceptual data model (so we understand at a high level what the different entities in our data are and how they relate to one another), then move on to the logical data model (so we understand the details of our data without worrying about how they will actually be implemented), and finally the physical data model (so we know exactly how to implement our data model in the database of choice). In a data warehousing project, the conceptual data model and the logical data model are sometimes considered a single deliverable.
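As a rough sketch of how detail accumulates across the three levels, the same model can be written out as plain data structures (all entity, table and column names below are hypothetical, chosen only to mirror the comparison above):

```python
# Sketch: one "Customer places Order" model expressed at the three levels.
# Every name below is illustrative, not from any particular system.

conceptual = {
    "entities": ["Customer", "Order"],
    "relationships": [("Customer", "places", "Order")],
}

logical = {  # adds attributes, primary keys and foreign keys
    "Customer": {"attributes": ["name", "address"], "primary_key": "customer_id"},
    "Order": {"attributes": ["order_date"], "primary_key": "order_id",
              "foreign_keys": ["customer_id"]},
}

physical = {  # adds table names, column names and column data types
    "CUSTOMER": {"CUSTOMER_ID": "INTEGER", "NAME": "VARCHAR(80)",
                 "ADDRESS": "VARCHAR(200)"},
    "ORDER_T": {"ORDER_ID": "INTEGER", "CUSTOMER_ID": "INTEGER",
                "ORDER_DATE": "DATE"},
}

# Only the physical level knows column data types.
assert physical["ORDER_T"]["ORDER_DATE"] == "DATE"
```

Each level adds the features ticked for it in the table, which is exactly why a conceptual model is drafted first and the physical model last.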

Business Growth

The two main factors that result in the growth of a business are

  • Manpower
  • Fund flow

This clearly indicates the manner in which authority flows in the organization. There will be responsible heads in the organization who communicate between the various levels and pass accurate information to senior management personnel.

The person who delegates the line of authority will have to impose various checks and controls on the method used for information gathering and assimilation. Thus, slowly a pyramid structure of the organization, which consists of various management levels, comes into focus.

Fund flow in an organization increases with the growth of business. This could mean that the suppliers, who supply raw material to the company, increase in number. The purchasers of the finished product also increase. Greater financial control has to be implemented in the organization.


The person who started the business will move away from the day-to-day decisions. These operations are carried out using the checks and controls set by the upper management and the board of directors.

As the business grows, it is not possible for a single person to manage it efficiently. This is because, in a manual system, processing data into vital information begins to consume ever greater amounts of time.

It should be a requirement of the job that business analysts document process AND data requirements. Processes create, read, update and delete data - they manipulate data. Processes that aren't manipulating data aren't doing anything. Processes exist to manipulate data, and a process simply cannot operate without the right data to manipulate at the right points in the process.

The following diagram illustrates how processes are the pivot around which solutions are specified, designed and implemented: Data models are the pivot around which processes are specified to create, read, update and delete data.
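The point that processes exist only to create, read, update and delete data can be shown with a minimal in-memory sketch (the store and record fields here are hypothetical):

```python
# Sketch: four tiny "processes", each manipulating a shared data store.
# A process that called none of these would be doing nothing at all.

store = {}  # stands in for a data store

def create(key, record):        # C
    store[key] = dict(record)

def read(key):                  # R
    return store.get(key)

def update(key, **changes):     # U
    store[key].update(changes)

def delete(key):                # D
    del store[key]

create("C001", {"name": "Ann", "balance": 100})
update("C001", balance=150)
assert read("C001")["balance"] == 150
delete("C001")
assert read("C001") is None
```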

Organizational Models

Generally, organizational models are classified as functional, product-based (divisional) or matrix structures. Functional organizations are traditional business hierarchies in which tasks are grouped by functional area, such as sales, administration, production, engineering and customer services. Functional models are most effective in a business where routine processes are performed time after time, or where there are just a few products or service lines.

For example, an accounting firm catering to businesses with fewer than 50 employees would have a functional model. In divisional models, employees are grouped by a common factor such as product, location or customer population. For example, a cleaning service may have separate divisions for office, restaurant and school-cleaning services. The matrix organization combines both functional and product model elements, using cross-functional teams in which employees work on projects and report to both a functional manager and a project manager.

There are six types of organization models they are as follows.

Model 1 is the classical bureaucracy, carefully blueprinted into functional departments, run from the top by the chief executive through various structures, rules, regulations, job descriptions and controls. It is designed to work like a machine, and operates very efficiently - so long as nothing changes!

Bureaucracies, like machines, operate well when there are stable functions to be performed, especially when they can be broken down into a series of separate operations coordinated from the top. But when an organization's tasks keep changing, it's a different story. The changes create a host of problems that no one is mandated to solve.

The problems thus work their way up the hierarchy, and eventually fall on the chief executive's desk. He or she soon gets overloaded, and initiates a shift to Model 2 by appointing a top management team. Collectively, they now deal with the problems, leaving the bureaucratic machine below (i.e. the functional departments) to continue with the routine work.

Model 2 works reasonably well for dealing with moderate amounts of change. But if the pace heats up, the top team gets overloaded, with a host of operational and strategic decisions demanding attention at team meetings. Gradually, or as a result of a specific organizational redesign, Model 2 thus gets pushed towards Model 3. Interdepartmental committees or project-teams are established within the body of the organization itself. The idea is that routine work will still be conducted through departmental hierarchies, with special problems or projects being delegated to the project-teams for investigation and the development of appropriate action plans.

This initiative, often heralded as a move to a "project organization," makes life bearable at the top again, since a lot of work can now be delegated. But because the teams are set within the context of a bureaucratic structure, they often fail to take off. There are a lot of projects and a lot of meetings. But there are also a lot of spinning wheels. The team meetings, as in Stereotype, become ritualistic.

Team members are usually representatives of their departments. As such, they have dual loyalties - to their departmental bosses, and to their team. But since real power over day-to-day activities and career progress rests with the departmental heads, the teams themselves do not develop any real clout. Members usually "sit in" on team meetings to present their department's point of view. If problems arise in the meeting, decisions are usually delayed until representatives have had a chance to "report back" and test departmental reactions. If the issue is truly controversial, it ends up getting passed to the top team, so that departmental heads can resolve it for themselves.

Model 3 is thus an organization characterized by pseudo teams that are only capable of dealing with relatively minor issues. In effect, Model 2 still rules.

All three of these models are evident in Stereotype, which, in effect has shifted through Models 1, 2 and 3. It has many of the problems described above. The powerlessness and cynical culture that has developed in the project teams is generic - shared by countless other organizations caught in the same bureaucratic trap. The structure of the organization has changed, but the culture and politics are still firmly set in the old mould.

Organizations can often make successful transitions from Models 1 or 2, to Model 3. But Model 3 is only effective when the issues delegated to the teams are small in number, require consultation rather than action, and allow generous time-frames for producing results. We are back to the contingency view of organization and management discussed in relation to Stereotype. To be effective, organizations need to structure themselves through models that are appropriate for dealing with the external challenges being faced. If the quest, as in Stereotype, is to create an organizational structure that is driven and enlivened from the middle by flexible, aggressive, innovative teams, the results of Model 3 are almost always disappointing.

To achieve the flexible, innovative, committed organization that is needed to deal with the turbulence and change found in the modern environment, organizations have to get beyond Model 3. This is where Models 4, 5 and 6, come in, especially Models 5 and 6

Model 4, the matrix organization, is a hybrid bureaucratic form. Its special character rests in the fact that it gives more or less equal priority to functional departments such as finance, administration, marketing, sales, production and R&D (the columns of the matrix) and to various business or product areas (the rows). Thus people working in various product or business teams within the organization have a dual focus. They must work with two perspectives in mind, combining functional skills and resources with an orientation driven by the key tasks they have been assigned.

The dual orientation means that bureaucratic power typical of Models 1, 2 and 3 is diluted, since the heads of major projects, or groups of projects can be as important and as powerful as heads of traditional functional departments. In this way, members of project-teams are not necessarily pulled back into the traditional lines of responsibility.

Since project heads may have a large influence on rewards and future career paths, real team commitment can develop. In successful examples of Model 4, the project-teams become the driving force behind innovation, providing an ability for the organization to change and adapt along with challenges emerging from the environment.

The same is true of Model 5. This model, typical of small and medium-sized organizations that are highly innovative, is built around teams. The influence of functional departments is minimized. People are appointed to work on specific projects.

One or two projects may command most of a person's energy at a particular time, but he or she may also be contributing to others. As the work on one project-team winds down, commitments on other teams increase. Career progress in this type of organization rests in moving from one project to another.

This kind of organization is ideally suited for dealing with the challenges of rapid change. Unlike the matrix of Model 4, it does not have a heavy functional structure to carry along. Its focus is teamwork, innovation, and successful initiatives, completed in a profitable, timely fashion. Functional departments, insofar as they exist, are support departments, committed to enhancing the work of the teams, who are their clients.

The whole operation is controlled by the management team at the centre. It focuses on strategic thrust, defining operational parameters, marshalling and channelling resources, monitoring results, and facilitating the general management of the system as a whole. The teams may be managed through "umbilical cords" characteristic of the spider plant model

The organization is much more like a fluid network of interaction than a bureaucratic structure. The teams are powerful, exciting and dynamic entities. There is frequent cross-fertilization of ideas, and a regular exchange of information, especially between team leaders and

Cost of Conversion

  • Advertisement
  • Follow-up cost
  • Telephone cost
  • Telegram cost
  • Letters: hand delivered, posted, or sent by courier
  • Seminars conducted
  • Travel and miscellaneous expenses incurred when letters are delivered by hand

In the above manner, the Cost of Conversion block is decomposed into various sub-heads. These are identified as given below:

Other sub-heads of the system would be

  • Hand-out material for students
  • Number of trainers required to balance the teaching load
  • Library control system
  • Examination and evaluation methods
  • Fees paid by the students

In the above manner, each sub-head is decomposed to its atomic level.

The breakup of a few blocks down to the atomic level is given in the STUDENT M.I.S. diagram.

Financial Requirements

The factors used to work out the financial requirements are as follows:

  • Type of Envelope
  • Quality of Paper + weight of paper
  • Weight of the Envelope
  • Printing cost for each letter
  • Cost of Postage

Instead of writing a program where all these atomic-level blocks are kept at fixed values, try to make them flexible by designing a separate Data Entry Screen for each such block.

A separate Data Entry Screen for envelopes takes into consideration all the factors of quality, quantity, weight, purpose, and so on. From each atomic-level decomposition, the costing for the different types of mailing can be accurately pinpointed and the cost of conversion accurately predicted.
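To make the idea concrete, here is a hedged sketch: the costing is computed entirely from values captured on a data entry screen (represented here as a plain dictionary; all field names and figures are hypothetical), so no atomic-level value is fixed in the program:

```python
# Sketch: cost of one mailed letter, driven entirely by screen input
# rather than hard-coded constants. Field names and figures are made up.

def mailing_cost(entry):
    return (entry["envelope_cost"]
            + entry["paper_cost_per_sheet"] * entry["sheets"]
            + entry["printing_cost_per_letter"]
            + entry["postage"])

screen_input = {                    # values keyed in on the Data Entry Screen
    "envelope_cost": 2.0,
    "paper_cost_per_sheet": 0.5,
    "sheets": 3,
    "printing_cost_per_letter": 1.5,
    "postage": 5.0,
}

assert mailing_cost(screen_input) == 10.0
```

If postage rates or paper costs change, only the screen input changes; the program itself is untouched.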

This Business Model is then to be converted into a structured design by Information Technology professionals.

The Method Used: Top-Down

Focus: For each block being analyzed, identify:

  • The material from which the manual system gets its INPUT.
  • The format in which this material is OUTPUT.
  • The process used to effect this transition.

Samples of each type of INPUT are collected. This forms the data entry screen (shape) and the screen data pickup field design.

The specific type of processing of the data loaded into data entry screen fields is achieved by writing a command file (programs) in a specific language.

Example for a student MIS using IBM Rational

Case Study: STUDENT INFORMATION SYSTEM

Problem statement:-

To find the information regarding students studying in a particular institute, including attendance and marks.

Disadvantages of Hard coding atomic block values into a program

A business model will be decomposed into various atomic levels. If you hard-code any of the atomic levels, it later becomes impossible to implement changes when they take place.

Customer Record = Customer Name + Customer Address + Payment Information + Outstanding Orders + Customer Type
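The data dictionary entry above could be mirrored as a record structure, sketched here in Python (field names follow the entry; the defaults are assumptions added for illustration):

```python
from dataclasses import dataclass, field

# Customer Record = Customer Name + Customer Address
#                 + Payment Information + Outstanding Orders + Customer Type
@dataclass
class CustomerRecord:
    customer_name: str
    customer_address: str
    payment_information: str
    outstanding_orders: list = field(default_factory=list)
    customer_type: str = "regular"   # assumed default, for illustration only

record = CustomerRecord("Ann", "12 Main St", "net-30")
assert record.outstanding_orders == []
```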

Improving System Quality

The models can improve the quality of systems analysis and design. By visually reviewing the major functions of the system, the developer can easily identify missing components and organize the system structure.

In a Human Resources data management system model, for example, it may be recognized that there is a facility for hiring and terminating personnel, but no facility for transferring personnel from one department to another.

TYPES OF MODELS

Business Models and Information System Models

  • The two basic types of models are business models and information system models. Business models represent the business itself, the information it tracks and the way in which it is used. These models may include both manual and automated functions.
  • The information system models document the automation of business functions, the actual storage of business data and the automated ways in which the information is accessed.
  • Both types of models graphically represent the interaction between data and the processes that must run to transform such data into information.
  • Models are developed for several layers: Conceptual, Logical and Physical. The information contained in the model may vary depending upon the layer at which the model is created.

MODEL DEVELOPMENT APPROACH

A common approach in the past was for information professionals to develop models on their own as part of an information system development project. The model developer drew on his or her own knowledge of the business or systems as the foundation of the model.

The business might provide assistance with specific questions about the model itself, but would not be directly involved in the model development activities.

JAD Sessions

Increasingly, the business community has played an active role in the development of these models. JOINT APPLICATION DEVELOPMENT (JAD) sessions are working sessions between Information Systems professionals and the business community that result in models. While these sessions are in progress, someone documents what goes on. It is on the basis of this documentation that the group creates a model.

JAD (Joint Application Development) is a methodology that involves the client or end user in the design and development of an application, through a succession of collaborative workshops called JAD sessions. Chuck Morris and Tony Crawford, both of IBM, developed JAD in the late 1970s and began teaching the approach through workshops in 1980.

The JAD approach, in comparison with the more traditional practice, is thought to lead to faster development times and greater client satisfaction, because the client is involved throughout the development process. In comparison, in the traditional approach to systems development, the developer investigates the system requirements and develops an application, with client input consisting of a series of interviews.

A variation on JAD, rapid application development (RAD) creates an application more quickly through such strategies as using fewer formal methodologies and reusing software components.

Using CASE Tools to support Model development

What is CASE TOOL?

Although modelling has been in existence for many years in the Business Information area, it took the development of special software that works on micros, minis and mainframes, called Computer Aided Software Engineering, to lead Information Systems professionals to use modelling in systems creation.

THE CASE FOR STRUCTURED DEVELOPMENT

Software development is an art and a science, say proponents of Computer-Aided Systems Engineering (CASE).

CASE technology is the automation of step-by-step methodologies for software and system development from step-one planning to ongoing maintenance. It is designed to automate the drudgery of development and free the developer to solve problems.

Long used by mainframe application developers, CASE technology is catching on among PC developers as companies are faced with backlogs in software development and antiquated systems that need updating, and as PC and local area network (LAN) applications grow more complex.

CASE tools help the Information Systems professional in the analysis, design and construction of a supporting Information System.

The specific purpose of this software tool is to automate the manual methods used in the software design life cycle and to improve the quality of the resulting application.

In general the Software design life cycle is as follows:

  • Planning: Gathering information about user problems and requirements; setting goals and criteria; generating alternative solutions.
  • Analysis: Determining user needs and system constraints; testing alternative solutions against requirements and constraints; generating a functional specification and a logical model for the best solution.
  • Design: Detailing the design for a selected solution, including diagrams relating all programs, subroutines and data flow.

A set of model development rules is embedded into most CASE tools. These rules are applied automatically and ensure that logical errors in systems design are trapped and corrected immediately.

CASING THE MARKET

CASE tools have one of the highest growth rates of any segment in the computer industry. Many companies are buying multiple copies of tools, in effect adopting CASE technology as their software productivity strategy.

Success with CASE will most likely occur when developers and managers choose tools based on methodologies similar to those already in place within the organization. Many CASE tools are microcomputer-based, use powerful graphics to enhance the user interface, and can be integrated with other CASE and non-CASE tools.

SYSTEM ANALYSIS AND DESIGN

In business today, there are many specific manual tasks that require men, machines and methods to work in harmony to produce useful output. Manual tasks take time to finish; hence useful output is often not readily available when required.

The application of the speed and power of a computer to a manual task sometimes accomplishes miracles, viewed from the standpoint of how quickly useful output can be made available to the user.

Analysis of the manual system is required so that we can understand how the system functions; we need to understand what its inputs are, what kind of processing takes place, and what kind of output is produced. A very clear understanding of the manual processing is crucial.

In the early days, system analysis and design was done based on the programmer's personal skills, which varied widely from person to person. Hence analysis was fairly unstructured and thus unfocused. This gave rise to a sort of trial-and-error method of analysis, with each analysis being done differently based on the experience gained so far.

As this wasted many man-hours and produced poorly designed systems, new methods were developed.

The computer's own power was harnessed to create diagrams and charts. They could be drawn and redrawn rapidly until the manual system was fully captured. Finally, the systems printed out these diagrams, which were filed. Thus systems were clearly and logically documented for use at a later date.

STRUCTURED SYSTEM ANALYSIS AND DESIGN

Structured methodologies help to standardize software development and maintenance by approaching them as an engineering discipline rather than as the whims of individual software developers.

  • Structuring produces clear, fast-to-write and easy-to-maintain programs, using diagrams to show the flow of data and the relationships between individual modules.
  • The biggest drawback of the structured methodologies is that their diagramming techniques are manual, slow and tedious.
  • To build intelligence into the software tool, several rules covering the drawing of data flow diagrams had to be formalized, developed and checked for accuracy. Once this was done, these rules were embedded into the way the tool worked.
  • Since it was a toolbox with a set of sophisticated tools, with a good deal of intelligence built in, harnessing the power of the computer itself to design and check out computer systems, the entire toolbox came to be called Computer Aided Software Engineering, and CASE tools came into existence.

THE PEOPLE WHO PIONEERED SSAD:

  • Edward Yourdon
  • Gane & Sarson
  • Jackson
  • Martin
  • Constantine
  • DeMarco
  • Mellor
  • Hatley
  • Ward

These were the individuals who defined, developed and checked the rules that were built into the software core of the CASE tool.

The cycle used by computer professionals to design a Computer System for a client is as follows:

  • Planning: Gathering information about user problems and requirements; setting goals and criteria; generating alternative solutions.
  • Analysis: Determining user needs and system constraints; testing alternative solutions against requirements and constraints; generating a functional specification and a logical model for the best solution.
  • Design: Detailing the design for a selected solution, including diagrams relating all programs, subroutines and data flow.
  • Implementation: Building, testing, installing, and tuning software.
  • Maintenance: Outlining and implementing plans for continually tuning, correcting and enhancing systems.

In this Analysis stage, the data gathered is formally entered into the CASE tool as Data Flow Diagrams. The data that has been identified for manipulation is stored in a Data Dictionary.

Hence data flow diagrams show us how data moves in the system; the file design and data pickup screens help users key in data in a predetermined sequence, so we know how data is to be picked up from the screen.

Hence documentation developed with CASE tools serves as a very rigid guide for the actual writing of code.

This is the paper document that indicates to the systems designer exactly what data has to be stored, processed and converted into information.

Process

This is what has to happen to the data so that it can be transformed (processed) into output information.

Symbol that represents Process

Example of the process

A single process can be exploded into several processes that have to execute before the data is converted into information.

Data Store

This is the computer file structure that stores the data that has been processed.

Symbol that represents Data Store

Data stores

Data stores are places where data may be stored. This information may be stored either temporarily or permanently. In any system you will probably need to make some assumptions about which relevant data stores to include. How many data stores you place on a DFD depends somewhat on the case study and how specific you are about the information stored in them. It is important to remember that unless we store information coming into our system, it will be lost.

The symbol for a data store is shown in Figure 4 and examples are given in Figure 5.

Examples of possible data stores

As data stores represent a person, place or thing, they are named with a noun. Each data store is given a unique identifier: D1, D2, D3, etc.

Data flows

The previous three symbols may be interconnected with data flows. These represent the flow of data to or from a process. The symbol is an arrow, with a brief description next to it of the data it represents. There are some interconnections, though, that are not allowed.

These are:

  • Between a data store and another data store. This would imply that one data store could independently decide to send some of its information to another data store. In practice this must involve a process.

  • Between an external entity and a data store. This would mean that an external entity could read from or write to the data store directly. Again, in practice this must involve a process.

  • Also, it is unusual to show interconnections between external entities. We are not normally concerned with information exchanges between two external entities as they are outside our system and therefore of less interest to us. Figure 6 shows some examples of data flows.

Examples of data flows
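The interconnection rules above boil down to a single check: every data flow must begin or end at a process. A small sketch of that rule (illustrative only, not any CASE tool's actual rule engine):

```python
# Sketch: validate a data flow by the kinds of its two ends.
# Kinds are 'process', 'store' (data store) and 'entity' (external entity).

def flow_allowed(source, target):
    return source == "process" or target == "process"

assert flow_allowed("entity", "process")        # entity -> process: fine
assert flow_allowed("process", "store")         # process -> store: fine
assert not flow_allowed("store", "store")       # store -> store: needs a process
assert not flow_allowed("entity", "store")      # entity -> store: needs a process
assert not flow_allowed("entity", "entity")     # entity -> entity: not shown
```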

Hints on drawing

Let’s look at a DFD and see how the features that have just been described may be used. Figure

7 shows an example DFD.

Here are some key points that apply to all DFDs.

  • All the data flows are labelled and describe the information that is being carried.
  • It tends to make the diagram easier to read if the processes are kept to the middle, the external entities to the left and the data stores appear on the right hand side of the diagram.
  • The process names start with a strong verb
  • Each process has access to all the information it needs. In the example above, process 4 is required to check orders. Although the case study has not been given, it is reasonable to suppose that the process is looking at a customer's order and checking that any order items correspond to ones that the company sells. In order to do this, the process reads data from the product data store.
  • Each process should have an output. If there is no output then there is no point in having that process. A corollary of this is that there must be at least one input to a process, as it cannot produce data but can only convert it from one form to another.
  • Data stores should have at least one data flow reading from them and one data flow writing to them. If the data is never accessed, there is a question as to whether it should be stored at all.
  • Processes: strong verbs
  • dataflow: nouns
  • data stores: nouns
  • external entities: nouns
  • No more than 7 - 9 processes in each DFD.
  • Dataflow must begin, end, or both begin & end with a process.
  • Dataflow must not be split.
  • A process is not an analog of a decision in a systems or programming flowchart. Hence, a dataflow should not be a control signal. Control signals are modeled separately as control flows.
  • Loops are not allowed.
  • A dataflow cannot be an input signal. If such a signal is necessary, then it must be a part of the description of the process, and such a process must be so labeled. Input signals, as well as their effect on the behavior of the system, are incorporated in the behavioral model (say, state transition graphs) of the information system.

  • Decisions and iterative controls are part of process description rather than dataflow.
  • If an external entity appears more than once on the same DFD, then a diagonal line is added to the north-west corner of the rectangle (representing such entity).
  • Updates to data stores are represented in the textbook as double-ended arrows. This is not, however, a universal convention. I would rather you did not use this convention, since it can be confusing: writing to a data store implies that you have read it (you cannot write without reading). Therefore, data store updates should be denoted by a single-ended arrow from the updating process to the updated data store.

  • Dataflows that carry a whole record between a data store and a process are not labeled in the textbook, since there is no ambiguity. This is also not a universal convention; I would rather you labeled such dataflows explicitly.
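The drawing rules above are mechanical enough to be checked by a small program. The following sketch validates a diagram against a few of them: the process count limit, the rule that every dataflow must touch a process, the input/output requirement for processes, and the read-and-write requirement for data stores. The `DFD` class and its attribute names are hypothetical, not taken from any CASE tool.

```python
# A minimal sketch of DFD rule-checking; for illustration only.

class DFD:
    def __init__(self):
        self.processes = set()   # process names (strong verbs)
        self.stores = set()      # data store names (nouns)
        self.entities = set()    # external entity names (nouns)
        self.flows = []          # (source, destination, label) triples

    def add_flow(self, src, dst, label):
        self.flows.append((src, dst, label))

    def validate(self):
        errors = []
        # No more than 7-9 processes in each DFD.
        if len(self.processes) > 9:
            errors.append("too many processes (max 9)")
        # Every dataflow must begin and/or end at a process.
        for src, dst, label in self.flows:
            if src not in self.processes and dst not in self.processes:
                errors.append(f"flow '{label}' does not touch a process")
        # Every process needs at least one input and one output.
        for p in self.processes:
            if not any(dst == p for _, dst, _ in self.flows):
                errors.append(f"process '{p}' has no input")
            if not any(src == p for src, _, _ in self.flows):
                errors.append(f"process '{p}' has no output")
        # Every data store should be both read from and written to.
        for s in self.stores:
            read = any(src == s for src, _, _ in self.flows)
            written = any(dst == s for _, dst, _ in self.flows)
            if not (read and written):
                errors.append(f"store '{s}' is not both read and written")
        return errors


dfd = DFD()
dfd.processes = {"1 Check Order"}
dfd.stores = {"Products"}
dfd.entities = {"Customer"}
dfd.add_flow("Customer", "1 Check Order", "order details")
dfd.add_flow("Products", "1 Check Order", "product data")
dfd.add_flow("1 Check Order", "Customer", "order confirmation")
print(dfd.validate())  # 'Products' is read but never written, so one error
```

A CASE tool performs checks of exactly this kind on its stored diagrams, which is one reason diagrams are held in a repository rather than as pictures.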

Conservation Principles

  • Data stores and dataflows: Data stores cannot create (or destroy) any data. Whatever comes out of a data store must therefore first have been put into it by a process.
  • Processes: Processes cannot create data out of thin air; they can only manipulate data they have received through incoming dataflows. The data outflows from a process must therefore be derivable from its data inflows.

Leveling Conventions

  • Numbering: The system under study in the context diagram is given the number `0'. The processes in the top-level DFD are labeled consecutively with natural numbers beginning at 1. When a process is exploded in a lower-level DFD, the processes in that lower-level DFD are numbered consecutively by appending a number to the parent process's label, separated by a period or full stop (for example 1.2, 1.2.3, etc.).
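The numbering convention can be illustrated with a small helper (a hypothetical function, shown only for illustration):

```python
def child_labels(parent_label, n_children):
    """Generate process labels for an exploded parent process.

    The system in the context diagram is process '0'; exploding it gives
    the top-level processes '1', '2', ...; exploding process '1.2' gives
    '1.2.1', '1.2.2', and so on.
    """
    if parent_label == "0":  # exploding the context diagram
        return [str(i) for i in range(1, n_children + 1)]
    return [f"{parent_label}.{i}" for i in range(1, n_children + 1)]

print(child_labels("0", 3))    # ['1', '2', '3']
print(child_labels("1.2", 2))  # ['1.2.1', '1.2.2']
```

The label of any process thus encodes its full ancestry, so a reader can locate the parent diagram of any exploded process immediately.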

Balancing

The set of DFDs pertaining to a system must be balanced, in the sense that for each dataflow beginning or ending at a process there must be an identical dataflow crossing the boundary of the DFD that explodes that process.

Data stores

Data stores may be local to a specific level in the set of DFDs. A data store is drawn at a given level only if it is referenced by more than one process at that level; otherwise it remains local to the single process that uses it.
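This "more than one process" rule can be sketched as follows (the function, process and store names are hypothetical, for illustration only):

```python
from collections import defaultdict

def visible_stores(references):
    """Split data stores into those drawn at this level and those concealed.

    references: (process, store) pairs at one level of the DFD set.
    A store referenced by more than one process is drawn; a store used
    by a single process stays hidden inside it until that process is
    exploded.
    """
    users = defaultdict(set)
    for process, store in references:
        users[store].add(process)
    shown = {s for s, ps in users.items() if len(ps) > 1}
    concealed = set(users) - shown
    return shown, concealed

refs = [("1 Take Order", "Orders"),
        ("2 Dispatch Goods", "Orders"),
        ("2 Dispatch Goods", "Picking Notes")]
shown, concealed = visible_stores(refs)
print(shown)      # {'Orders'}
print(concealed)  # {'Picking Notes'}
```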

External entities

Lower-level DFDs cannot introduce new external entities. The context diagram must therefore show all external entities with which the system under study interacts. In order not to clutter higher-level DFDs, detailed interactions of processes with external entities are often shown in lower-level DFDs but not in the higher-level ones. In this case, there will be some dataflows at the lower-level DFDs that do not appear in the higher-level DFDs.

In order to facilitate unambiguous balancing of DFDs, such dataflows are crossed out to indicate that they are not to be considered in balancing. This convention of crossing out is quite popular, though the textbook does not follow it. I would rather you followed this convention.

There are two types of DFDs, known as physical and logical DFDs. Though both serve the same purpose of representing data flow, there are some differences between the two, which are discussed below.

Any DFD begins with an overview (context) DFD that describes, in a nutshell, the system to be designed. A logical data flow diagram, as the name indicates, concentrates on the business: it describes the events that take place in the business and the data generated by each such event. A physical DFD, on the other hand, is more concerned with how the flow of information is actually implemented. It is usual practice to use DFDs to represent the logical flow and processing of data.

However, it is prudent to derive the logical DFD after first developing a physical DFD that shows the people in the organization performing the various operations and how data flows between them.

Maintain Consistency between Processes

When developing DFDs in more detail, it is important to maintain consistency between levels. No new inputs or outputs to a process should be introduced at a lower level that were not identified at a higher level. However, within a process, new data flows and data stores may be identified.

Follow Meaningful Levelling Conventions

Levelling governs the handling of local data stores (those that are used within a single process). Details that pertain only to a single process at a particular level should be held within that process: data stores and data flows that are relevant only to the inside of a process are concealed until that process is exploded into greater detail.