Lecture notes of Computer Communication Systems

These notes cover the evolution of distributed computing from mainframe-based applications to network computing, and the challenges faced by software applications running on different hardware platforms, operating systems, and networks. They also explain the advantages of distributed computing over traditional standalone applications and discuss core distributed computing technologies such as client/server applications, OMG CORBA, Java RMI, Microsoft COM/DCOM, and MOM, before concluding with the challenges distributed computing technologies face in integrating applications across networks.

Typology: Lecture notes

2022/2023

Available from 03/09/2023

hemagm

UNIT-1 Evolution of Distributed Computing

In the early years of computing, mainframe-based applications were considered the best-fit solution for executing large-scale data-processing applications. With the advent of personal computers (PCs), software programs running on standalone machines became much more popular in terms of cost of ownership and ease of application use. As the number of PC-based application programs running on independent machines grew, communication between such application programs became extremely complex, and application-to-application interaction became a growing challenge. Later, network computing gained importance, and enabling remote procedure calls (RPCs) over the Transmission Control Protocol/Internet Protocol (TCP/IP) turned out to be a widely accepted way for application software to communicate. Since then, software applications running on a variety of hardware platforms, operating systems, and networks have faced challenges when required to communicate with each other and share data. This demanding requirement led to the concept of distributed computing applications.

As a definition, "Distributed computing is a type of computing in which different components and objects comprising an application can be located on different computers connected to a network." A distributed computing model provides an infrastructure enabling invocations of object functions located anywhere on the network. The objects are transparent to the application and provide processing power as if they were local to the application calling them.

Importance of Distributed Computing

The distributed computing environment provides many significant advantages compared to a traditional standalone application. The following are some of the key advantages:

Higher performance. Applications can execute in parallel and distribute the load across multiple servers.

Collaboration. Multiple applications can be connected through standard distributed computing mechanisms.

Higher reliability and availability. Applications or servers can be clustered across multiple machines.

Scalability. This can be achieved by deploying reusable distributed components on powerful servers.

Extensibility. This can be achieved through dynamic (re)configuration of applications that are distributed across the network.

Higher productivity and lower development cycle time. By breaking up large problems into smaller ones, the individual components can be developed by smaller development teams in isolation.

Reuse. The distributed components may perform various services that can potentially be used by multiple client applications. This saves repetitive development effort and improves interoperability between components.

Reduced cost. Because this model provides a lot of reuse of once-developed components that are accessible over the network, significant cost reductions can be achieved.

Distributed computing has also changed the way traditional network programming is done, by providing shareable, object-like semantics across networks using programming languages like Java, C, and C++. The following sections briefly discuss core distributed computing technologies such as client/server applications, OMG CORBA, Java RMI, Microsoft COM/DCOM, and MOM.

Client-Server Applications

The early years of distributed application architecture were dominated by two-tier business applications. In a two-tier architecture model, the first (upper) tier handles the presentation and business logic of the user application (the client), and the second (lower) tier handles the application organization and its data storage (the server). This approach is commonly called the client-server application architecture. Generally, the server in a client-server application model is a database server that is mainly responsible for the organization and retrieval of data. The application client in this model handles most of the business processing and provides the graphical user interface of the application. It is a very popular design in business applications where the user interface and business logic are tightly coupled with a database server for handling data retrieval and processing. For example, the client-server model has been widely used in enterprise resource planning (ERP), billing, and inventory application systems where a number of client business applications residing on multiple desktop systems interact with a central database server.
Figure 1.2 shows an architectural model of a typical client-server system in which multiple desktop-based business client applications access a central database server. Some of the common limitations of the client-server application model are as follows:

■ Complex business processing at the client side demands robust client systems.
■ Security is more difficult to implement because the algorithms and logic reside on the client side, making it more vulnerable to hacking.
■ Increased network bandwidth is needed to accommodate many calls to the server, which can impose scalability restrictions.
■ Maintenance and upgrades of client applications are extremely difficult because each client has to be maintained separately.
■ Client-server architecture suits mostly database-oriented standalone applications and does not target robust, reusable component-oriented applications.
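The two-tier request/response interaction described above can be sketched with plain TCP sockets. This is a minimal illustration, not tied to any particular database server; the class, method names, and the "customer:42" request format are all hypothetical, and the database lookup is simulated by a string reply.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class TwoTierSketch {
    // "Server tier": listens on a socket and answers one data-lookup request.
    static void startServer(ServerSocket server) {
        Thread t = new Thread(() -> {
            try (Socket s = server.accept();
                 BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
                 PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                String request = in.readLine();
                // A real two-tier server would query a database here.
                out.println("RESULT for " + request);
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
        t.setDaemon(true);
        t.start();
    }

    // "Client tier": opens a connection, sends a request, reads the reply.
    static String query(int port, String request) throws IOException {
        try (Socket s = new Socket("localhost", port);
             PrintWriter out = new PrintWriter(s.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()))) {
            out.println(request);
            return in.readLine();
        }
    }

    public static void main(String[] args) throws IOException {
        ServerSocket server = new ServerSocket(0); // OS-assigned free port
        startServer(server);
        System.out.println(query(server.getLocalPort(), "customer:42"));
    }
}
```

Note how the limitations listed above already show in this sketch: the client holds the request logic, and every client call costs a network round trip.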

Legacy and custom application integration. Using CORBA IDL, developers can encapsulate existing and custom applications as callable client applications and use them as objects on the ORB.

Rich distributed object infrastructure. CORBA offers developers a rich set of distributed object services, such as the Lifecycle, Events, Naming, Transactions, and Security services.

Location transparency. CORBA provides location transparency: an object reference is independent of the physical location and application-level location. This allows developers to create CORBA-based systems in which objects can be moved without modifying the underlying applications.

Java RMI

Java RMI was developed by Sun Microsystems as the standard mechanism to enable distributed Java object-based application development using the Java environment. RMI provides a distributed Java application environment by calling remote Java objects and passing them as arguments or return values. It uses Java object serialization, a lightweight object persistence technique that allows the conversion of objects into streams.

Before RMI, the only way to do inter-process communication in the Java platform was to use the standard Java network libraries. Though the java.net APIs provided sophisticated support for network functionality, they were not intended to support or solve the distributed computing challenges. Java RMI uses the Java Remote Method Protocol (JRMP) as the inter-process communication protocol, enabling Java objects living in different Java Virtual Machines (VMs) to transparently invoke one another's methods. Because these VMs can be running on different computers anywhere on the network, RMI enables object-oriented distributed computing.

RMI also uses a reference-counting garbage collection mechanism that keeps track of external live object references to remote objects (live connections) using the virtual machine. When an object is found to be unreferenced, it is considered a weak reference and will be garbage collected.

In RMI-based application architectures, a registry-oriented mechanism (rmiregistry) provides a simple non-persistent naming lookup service that is used to store remote object references and to enable lookups from client applications. The RMI infrastructure, based on JRMP, acts as the medium between the RMI clients and remote objects. It intercepts client requests, passes invocation arguments, delegates invocation requests to the RMI skeleton, and finally passes the return values of the method execution to the client stub.
It also enables callbacks from server objects to client applications so that asynchronous notifications can be achieved. Figure 1. depicts the architectural model of a Java RMI-based application solution. The Java RMI architecture is composed of the following components:

RMI client. The RMI client, which can be a Java applet or a standalone application, performs the remote method invocations on a server object. It can pass arguments that are primitive data types or serializable objects.

RMI stub. The RMI stub is the client proxy generated by the RMI compiler (rmic, provided along with the Java Developer Kit, JDK) that encapsulates the network information of the server and delegates the method invocation to the server. The stub also marshals the method arguments and unmarshals the return values from the method execution.

RMI infrastructure. The RMI infrastructure consists of two layers: the remote reference layer and the transport layer. The remote reference layer separates out the specific remote reference behavior from the client stub. It handles certain reference semantics, such as connection retries and the unicast/multicast of invocation requests. The transport layer provides the networking infrastructure that facilitates the actual data transfer during method invocations, the passing of formal arguments, and the return of execution results.

RMI skeleton. The RMI skeleton, which is also generated using the RMI compiler (rmic), receives the invocation requests from the stub, processes (unmarshals) the arguments, and delegates them to the RMI server. Upon successful method execution, it marshals the return values and then passes them back to the RMI stub via the RMI infrastructure.

RMI server. The server is the Java remote object that implements the exposed interfaces and executes the client requests. It receives incoming remote method invocations from the respective skeleton, which passes the parameters after unmarshalling. Upon successful method execution, return values are sent back to the skeleton, which passes them back to the client via the RMI infrastructure.
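The registry, stub, and remote-object roles described above can be sketched in a single, self-contained program. The interface and names (Greeter, "greeter") are illustrative, the sketch assumes the default RMI port 1099 is free, and note one difference from the rmic-based workflow the text describes: since Java 5 the stub is generated dynamically at runtime rather than by the rmic tool.

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

public class RmiSketch {
    // Remote interface: every remotely callable method declares RemoteException.
    public interface Greeter extends Remote {
        String greet(String name) throws RemoteException;
    }

    // RMI server: the remote object implementing the exposed interface.
    static class GreeterImpl implements Greeter {
        public String greet(String name) { return "Hello, " + name; }
    }

    static String demo() throws Exception {
        // Start an in-process registry (nonpersistent naming lookup service).
        Registry registry = LocateRegistry.createRegistry(1099);

        // Export the server object; the returned stub is the client proxy.
        GreeterImpl impl = new GreeterImpl();
        Greeter stub = (Greeter) UnicastRemoteObject.exportObject(impl, 0);
        registry.rebind("greeter", stub);

        // Client side: look up the remote reference and invoke a method.
        Greeter remote = (Greeter) LocateRegistry.getRegistry("localhost", 1099)
                .lookup("greeter");
        String result = remote.greet("RMI");

        // Clean up so the JVM can exit.
        UnicastRemoteObject.unexportObject(impl, true);
        UnicastRemoteObject.unexportObject(registry, true);
        return result;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo());
    }
}
```

Here both "sides" run in one JVM for brevity; in a real deployment the lookup would name the server's host, and the call would cross the network through the stub, the RMI infrastructure, and the skeleton exactly as described above.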

Microsoft DCOM

The Microsoft Component Object Model (COM) provides a way for Windows-based software components to communicate with each other by defining a binary and network standard in a Windows operating environment. COM evolved from OLE (Object Linking and Embedding), which employed a Windows registry-based object organization mechanism. COM provides a distributed application model for ActiveX components.

As a next step, Microsoft developed the Distributed Component Object Model (DCOM) as its answer to the distributed computing problem on the Microsoft Windows platform. DCOM enables COM applications to communicate with each other using an RPC mechanism, which employs a DCOM protocol on the wire. Figure 1.5 shows an architectural model of DCOM.

DCOM applies a skeleton-and-stub approach whereby a defined interface that exposes the methods of a COM object can be invoked remotely over a network. The client application invokes methods on such a remote COM object in the same fashion that it would with a local COM object. The stub encapsulates the network location information of the COM server object and acts as a proxy on the client side. The servers can potentially host multiple COM objects, and when they register themselves against a registry, they become available to all clients, who then discover them using a lookup mechanism.

■ Quality of Service (QoS) goals like scalability, performance, and availability in a distributed environment consume a major portion of the application's development time.
■ Interoperability of applications implementing different protocols on heterogeneous platforms becomes nearly impossible: for example, a DCOM client communicating with an RMI server, or an RMI client communicating with a DCOM server.
■ Most of these protocols are designed to work well within local networks. They are not very firewall friendly, nor easily accessible over the Internet.

The Role of J2EE and XML in Distributed Computing

The emergence of the Internet has helped enterprise applications become easily accessible over the Web without specific client-side software installations. In the Internet-based enterprise application model, the focus was to move the complex business processing toward centralized servers in the back end. The first generation of Internet servers was based upon Web servers that hosted static Web pages and provided content to clients via HTTP (HyperText Transfer Protocol). HTTP is a stateless protocol that connects Web browsers to Web servers, enabling the transportation of HTML content to the user. With the high popularity and potential of this infrastructure, the push for a more dynamic technology was inevitable. This was the beginning of server-side scripting, using technologies like CGI, NSAPI, and ISAPI.

With many organizations moving their businesses to the Internet, a whole new category of business models like business-to-business (B2B) and business-to-consumer (B2C) came into existence. This evolution led to the specification of the J2EE architecture, which promoted a much more efficient platform for hosting Web-based applications. J2EE provides a programming model based upon Web and business components that are managed by the J2EE application server. The application server consists of many APIs and low-level services available to the components. These low-level services provide security, transactions, connection and instance pooling, and concurrency services, which enable a J2EE developer to focus primarily on business logic rather than plumbing.

The power of Java and its rich collection of APIs provided the perfect solution for developing highly transactional, highly available, and scalable enterprise applications. Based on many standardized industry specifications, J2EE provides the interfaces to connect with various back-end legacy and information systems.
J2EE also provides excellent client connectivity capabilities, ranging from PDAs to Web browsers to rich clients (applets, CORBA applications, and standard Java applications). Figure 1.7 shows the various components of the J2EE architecture. A typical J2EE architecture is physically divided into three logical tiers, which enables clear separation of the various application components with defined roles and responsibilities. The following is a breakdown of the functionalities of those logical tiers:

Presentation tier. The Presentation tier is composed of Web components, which handle HTTP requests/responses, session management, device-independent content delivery, and the invocation of business-tier components.

Application tier. The Application tier (also known as the Business tier) deals with the core business logic processing, which may typically deal with workflow and automation. The business components retrieve data from the information systems via well-defined APIs provided by the application server.

Integration tier. The Integration tier deals with connecting and communicating with back-end Enterprise Information Systems (EIS), database applications, and legacy or mainframe applications.
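The separation of the Presentation and Application tiers can be sketched with the JDK's built-in HTTP server. To be clear, this is not the J2EE servlet API; it is a hypothetical stand-in that shows the same division of responsibilities, with the HTTP handler playing the Web component and a plain method playing the business component. All names and the "/quote" path are illustrative.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class TierSketch {
    // "Presentation tier": handles the HTTP request/response cycle and
    // delegates the real work to the tier below.
    static HttpServer start() throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/quote", exchange -> {
            byte[] body = businessLogic().getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }

    // "Application tier": core business logic, kept free of HTTP details.
    static String businessLogic() {
        return "price=42";
    }

    // A thin client, standing in for the browser.
    static String fetch(int port) throws IOException {
        HttpURLConnection con = (HttpURLConnection)
                new URL("http://localhost:" + port + "/quote").openConnection();
        try (InputStream in = con.getInputStream()) {
            return new String(in.readAllBytes(), StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws IOException {
        HttpServer server = start();
        System.out.println(fetch(server.getAddress().getPort()));
        server.stop(0);
    }
}
```

In a real J2EE deployment the handler's role is played by servlets or JSPs managed by the application server, and businessLogic() by business components, with the Integration tier behind them talking to the EIS.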

UNIT-2 Emergence of Web Services

Today, the adoption of the Internet and of Internet-enabled applications has created a world of discrete business applications, which co-exist in the same technology space but without interacting with each other. The increasing demands of the industry for enabling B2B, application-to-application (A2A), and inter-process application communication have led to a growing requirement for service-oriented architectures.

Enabling service-oriented applications facilitates the exposure of business applications as service components, enabling business applications from other organizations to link with these services for application interaction and data sharing without human intervention. This architecture also enables interoperability between business applications and processes.

By adopting Web technologies, the service-oriented architecture model facilitates the delivery of services over the Internet by leveraging standard technologies such as XML. It uses platform-neutral standards, exposing the underlying application components and making them available to any application, any platform, any device, at any location. Today, this phenomenon is widely implemented and is commonly referred to as Web services. Because this technique enables communication between applications with the addition of service activation technologies and open technology standards, it can be leveraged to publish the services in a registry of yellow pages available on the Internet. This will further redefine and transform the way businesses communicate over the Internet. This promising new technology sets the strategic vision of the next generation of virtual business models and the unlimited potential for organizations doing business collaboration and business process management over the Internet.

What Are Web Services

Web services are based on the concept of service-oriented architecture (SOA). SOA is the latest evolution of distributed computing, which enables software components, including application functions,

■ Web services provide a cross-platform integration of business applications over the Internet.
■ To build Web services, developers can use any common programming language, such as Java, C, C++, Perl, Python, C#, and/or Visual Basic, and its existing application components.
■ Web services are not meant for handling presentation like HTML content; they are developed to generate XML for uniform accessibility through any software application, platform, or device.
■ Because Web services are based on loosely coupled application components, each component is exposed as a service with its unique functionality.
■ Web services use industry-standard protocols like HTTP, and they can be easily accessed through corporate firewalls.
■ Web services can be used by many types of clients.
■ Web services vary in functionality, from a simple request to a complex business transaction involving multiple resources.
■ All platforms, including J2EE, CORBA, and Microsoft .NET, provide extensive support for creating and deploying Web services.
■ Web services are dynamically located and invoked from public and private registries based on industry standards such as UDDI and ebXML.
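The "generate XML for uniform accessibility" point above can be made concrete with the JDK's standard DOM APIs: one side builds a platform-neutral XML payload, and any client, on any platform, can parse it back. This is a minimal sketch; the getQuote/symbol element names are hypothetical, not from any particular Web service standard.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import java.io.StringReader;
import java.io.StringWriter;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.xml.sax.InputSource;

public class XmlPayloadSketch {
    // Producer side: build a small XML request document and serialize it.
    static String buildRequest(String symbol) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().newDocument();
        Element root = doc.createElement("getQuote");
        Element sym = doc.createElement("symbol");
        sym.setTextContent(symbol);
        root.appendChild(sym);
        doc.appendChild(root);

        StringWriter out = new StringWriter();
        TransformerFactory.newInstance().newTransformer()
                .transform(new DOMSource(doc), new StreamResult(out));
        return out.toString();
    }

    // Consumer side: any client can parse the same payload back.
    static String readSymbol(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));
        return doc.getElementsByTagName("symbol").item(0).getTextContent();
    }

    public static void main(String[] args) throws Exception {
        String xml = buildRequest("SUNW");
        System.out.println(xml);
        System.out.println(readSymbol(xml));
    }
}
```

Real Web services wrap such payloads in standard envelopes (SOAP) and describe and register them via WSDL and UDDI/ebXML, but the underlying idea is the same: the data crosses the wire as plain XML rather than as platform-specific binary formats.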