Introduction to Data Communication & Networking
Data communication refers to the exchange of data between a source and a receiver via some form of transmission medium, such as a wire cable. Data communication is said to be local if the communicating devices are in the same building or a similarly restricted geographical area.
The meanings of source and receiver are simple: the device that transmits the data is known as the source, and the device that receives the transmitted data is known as the receiver. Data communication aims at the transfer of data and the preservation of the data during the process, but not at the actual generation of the information at the source and receiver.
Data means facts, information, statistics, or the like, derived by calculation or experimentation. The facts and information so gathered are processed in accordance with defined systems of procedure. Data can exist in a variety of forms such as numbers, text, bits and bytes. The figure is an illustration of a simple data communication system.
The term data is used to describe information, in whatever form it is expressed.
A data communication system may collect data from remote locations through data transmission circuits and then output processed results to remote locations. The figure provides a broader view of data communication networks. The different data communication techniques presently in widespread use evolved gradually, either to improve existing techniques or to replace them with better options and features. Then there is data communication jargon to contend with, such as baud rate, modems, routers, LAN, WAN, TCP/IP and ISDN, during the selection of communication systems. Hence it becomes necessary to review these terms and the gradual development of data communication methods.
History of the Internet
This may be considered the breakthrough for many current ideas, algorithms and Internet technologies. It started with the work of Paul Baran in the 1960s, funded by the Advanced Research Projects Agency (ARPA), an organization of the United States Defense Department, and was therefore named the Advanced Research Projects Agency Network (ARPANET), the predecessor of the modern Internet. It was the world's first fully operational packet-switching computer network, and the first successful computer network to implement the TCP/IP reference model, which was used by ARPANET before being adopted across the Internet. ARPANET is the network that planted the seed of the Internet.
It was built to support research on packet-switching technology and to allow resource sharing among the Department of Defense's contractors. The network interconnected research centers, some military bases and government locations. It soon became popular with researchers for collaboration through electronic mail and other services.
It is basically a WAN. It was developed in 1968 by ARPA (Advanced Research Projects Agency), the research arm of the US Department of Defense (DoD).
• ARPANET was designed to survive even a nuclear attack.
• Before ARPANET, the networks were basically the telephone networks which operated on the circuit switching principle.
• But this network was too vulnerable, because the loss of even one line or switch would terminate all the conversations.
• ARPANET used the concept of packet switching network consisting of subnet and host computers.
• The subnet was a datagram subnet consisting of minicomputers called IMPs (Interface Message Processors).
• Each node of the network used to have an IMP and a host connected by a short wire.
• The host could send messages of up to 8063 bits to its IMP, which would break them into packets and forward them independently toward the destination.
• The subnet was the first electronic store-and-forward type packet switched network. So each packet was stored before it was forwarded.
• The software for ARPANET was split into two parts namely subnet and host.
• In 1974 the TCP/IP model and protocols were invented specifically to handle communication over internetworks, because more and more networks were being connected to ARPANET.
The TCP/IP made the connection of LANs to ARPANET easy.
• During 1980s so many LANs were connected to ARPANET that finding hosts became increasingly difficult and expensive.
• So DNS (Domain Name System) was created to organize machines into domains and map host names onto IP addresses.
• In 1983 the management of ARPANET was handed over to the Defense Communications Agency (DCA), which separated the military portion into a separate network, MILNET.
• By 1990 ARPANET had been shut down and dismantled; however, MILNET continued to operate.
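The store-and-forward packetization performed by the IMPs, as described above, can be sketched in a few lines of Python. Only the 8063-bit host-to-IMP message limit comes from the text; the 1008-bit packet size is an assumed figure purely for illustration.

```python
# Toy sketch of an IMP splitting a host message into packets.
# MAX_MESSAGE_BITS is from the text; PACKET_BITS is a hypothetical value.

MAX_MESSAGE_BITS = 8063
PACKET_BITS = 1008  # assumed per-packet payload size, for illustration only

def packetize(message_bits: str) -> list:
    """Split a bit string (e.g. '010110...') into fixed-size packets."""
    if len(message_bits) > MAX_MESSAGE_BITS:
        raise ValueError("message exceeds the 8063-bit host-to-IMP limit")
    return [message_bits[i:i + PACKET_BITS]
            for i in range(0, len(message_bits), PACKET_BITS)]

packets = packetize("01" * 2500)        # a 5000-bit message
print(len(packets))                      # packets forwarded independently
print(sum(len(p) for p in packets))      # total bits preserved
```

Each packet would be stored completely at every intermediate IMP before being forwarded, which is exactly what "store-and-forward" means.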
The National Science Foundation Network (NSFNet) is a wide area network that was developed by the National Science Foundation to replace ARPANET as the main network linking government and research facilities.
NSFNet was a major force in the development of computing infrastructure and enhanced network services. By making high-speed networking available to national computer centers and inter-linked regional networks, NSFNet created a network of networks, which laid the foundation for today’s Internet.
NSFNet was dismantled in 1995 and replaced with a commercial Internet backbone.
Architecture of the Internet
In the late 1960s, the US Department of Defense decided to build a large network out of a multitude of small networks, all different, which were beginning to abound everywhere in North America. A way had to be found to make these networks coexist and to give them an external visibility that was the same for all users. Hence the name InterNetwork, abbreviated Internet, given to this network of networks.
The Internet architecture is based on a simple idea: ask all the networks that want to be part of it to carry a single packet type, in a specific format defined by the IP protocol. In addition, this IP packet must carry an address defined with sufficient generality to identify each of the computers and terminals scattered throughout the world. This architecture is illustrated in the figure.
A user who wishes to transmit over this internetwork must place the data in IP packets, which are delivered to the first network to be crossed. This first network encapsulates the IP packet in its own packet structure, packet A, which circulates in this form until it reaches an exit gateway, where it is decapsulated to retrieve the IP packet. The IP address is then examined to determine, thanks to a routing algorithm, the next network to cross, and so on until the destination terminal is reached.
To complement IP, the US Department of Defense added the TCP protocol, which specifies the nature of the interface with the user. This protocol also determines how to transform a stream of bytes into IP packets, while ensuring the quality of transport of those packets. The two protocols, assembled under the abbreviation TCP/IP, take the form of a layered architecture; they correspond to the packet level and the message level of the reference model.
The Internet model is completed with a third layer, called the application level, which includes different protocols on which to build Internet services. Email (SMTP), the file transfer (FTP), the transfer of hypermedia pages, transfer of distributed databases (World Wide Web), etc., are some of these services. Figure shows the three layers of the Internet architecture.
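The encapsulation and decapsulation described above can be illustrated with a minimal sketch. Headers are simplified to plain strings here; real TCP/IP headers are binary structures with many more fields, and "NET-A" stands in for the first network's own packet format.

```python
# Minimal sketch of layered encapsulation: data -> TCP segment -> IP packet
# -> first network's own packet (A), then decapsulation at the exit gateway.

def encapsulate(payload: str, header: str) -> str:
    return "[" + header + "]" + payload

def decapsulate(packet: str, header: str) -> str:
    prefix = "[" + header + "]"
    assert packet.startswith(prefix), "wrong layer"
    return packet[len(prefix):]

data = "hello"
segment = encapsulate(data, "TCP")        # message level
ip_packet = encapsulate(segment, "IP")    # packet level
frame = encapsulate(ip_packet, "NET-A")   # first network's own structure

# At the exit gateway of network A, the frame is decapsulated to recover
# the IP packet, whose address is examined to pick the next network.
recovered_ip = decapsulate(frame, "NET-A")
print(frame)  # [NET-A][IP][TCP]hello
```

The same recover-and-re-encapsulate step repeats at each network boundary until the destination terminal is reached.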
IP packets are independent of each other and are individually routed through the network by the devices that interconnect the subnets, the routers. The quality of service offered by IP is minimal: it provides no detection of lost packets and no possibility of error recovery.
TCP provides the functionality of the message level of the reference model. It is a fairly complex protocol, with many options for solving the packet-loss problems of the lower levels. In particular, a lost fragment can be recovered by retransmission of that part of the byte stream. TCP uses a connection-oriented mode.
Data communication is the process of transferring data electronically from one place to another. Data can be transferred using different media. The basic components of data communication are as follows:
- Message
- Sender
- Receiver
- Medium/communication channel
- Encoder and decoder
The message is the data or information to be communicated. It may consist of text, numbers, pictures, sound, video or any combination of these.
The sender is a device that sends the message. It is also called the source or transmitter. Normally, a computer is used as the sender in information communication systems.
The receiver is a device that receives the message. It is also called the sink. The receiver can be a computer, printer or another computer-related device, and it must be capable of accepting the message.
The medium is the physical path that connects the sender and receiver and is used to transmit data. The medium can be a copper wire, a fiber-optic cable, microwaves etc. It is also called the communication channel.
Encoder and decoder
The encoder is a device that converts digital signals into a form that can pass through a transmission medium. The decoder is a device that converts the encoded signals back into digital form, which the receiver can understand. Sender and receiver cannot communicate successfully without an encoder and decoder.
Computers always use binary numbers to represent everything, because electronic systems are naturally binary: something is either on or off, 1 or 0.
It is therefore interesting to understand how binary numbers are put on a cable. A cable cannot simply be "on" or "off", so some technique must be put in place. Several techniques exist, but the most important is modulation.
With modulation, you take an electric wave and manipulate its amplitude and/or frequency to represent binary numbers. This, in simple terms, is what a signal does.
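The idea can be sketched with a simplified amplitude modulation scheme (amplitude-shift keying): each bit selects the amplitude of a stretch of carrier wave. The carrier frequency, sample counts and amplitude values below are arbitrary choices for illustration, not parameters of any real system.

```python
# Simplified ASK modulation: '1' -> high-amplitude carrier, '0' -> low.
import math

CARRIER_HZ = 4        # carrier cycles per bit period (arbitrary)
SAMPLES_PER_BIT = 16  # resolution of each bit's waveform (arbitrary)

def modulate(bits: str) -> list:
    """Turn a bit string into a list of signal samples."""
    signal = []
    for bit in bits:
        amplitude = 1.0 if bit == "1" else 0.2  # amplitude encodes the bit
        for n in range(SAMPLES_PER_BIT):
            t = n / SAMPLES_PER_BIT
            signal.append(amplitude * math.sin(2 * math.pi * CARRIER_HZ * t))
    return signal

wave = modulate("101")
print(len(wave))  # 3 bits * 16 samples = 48 samples
```

A receiver would demodulate by measuring the amplitude of each bit period and mapping it back to 1 or 0; real systems also manipulate frequency or phase in the same spirit.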
Communication means the transfer or exchange of data between two devices. How data is transferred in a communication process depends on the devices involved and on the direction in which data may flow.
Data flow in communication has the following types:
In simplex mode, data flows in only one direction. This means that if two devices are connected, only one device can send data; the other device can only receive and cannot send.
In this mode the channel uses all of its capacity for sending data in that single direction.
An example of this type is a mouse, which can only input data.
In half-duplex mode, data flows in both directions, but not at the same time. For example, if two devices are connected, both can send information to each other, but not simultaneously: while one device is sending, the other can only receive; once it has received, it can send in turn.
In half duplex the channel uses all of its capacity for whichever direction is active. This mode is used in communication where a response is not needed at the same time. An example of this type is walkie-talkies.
In full-duplex mode, data flows in both directions at the same time. For example, if two devices are connected, both of them can send and receive data simultaneously.
In full duplex the channel divides its capacity between the two directions. Full duplex is used when communication is required in both directions at the same time. An example of full duplex is a call on a mobile phone.
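The three data-flow modes above can be summarized as a toy model of how a channel's capacity is available to each direction. The bps figure is illustrative, and the even split for full duplex follows the text's description of the channel dividing its capacity.

```python
# Toy model: capacity (in bps) available per direction for each flow mode.

def capacity_per_direction(mode: str, channel_bps: int) -> dict:
    if mode == "simplex":
        return {"A->B": channel_bps, "B->A": 0}            # one way only
    if mode == "half-duplex":
        # Either direction may use the full capacity, but only in turns.
        return {"A->B": channel_bps, "B->A": channel_bps}
    if mode == "full-duplex":
        half = channel_bps // 2                            # capacity divided
        return {"A->B": half, "B->A": half}
    raise ValueError("unknown mode: " + mode)

print(capacity_per_direction("simplex", 1000))
print(capacity_per_direction("full-duplex", 1000))
```

The half-duplex entry deliberately shows full capacity in both directions, since the constraint there is "not at the same time" rather than a capacity split.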
A communication model is used to exchange data between two parties, for example communication between a computer, a server and a telephone (through a modem).
Communication Model: Source
Data to be transmitted is generated by this device; examples are telephones and personal computers.
Communication Model: Transmitter
The data generated by the source system is not transmitted directly in the form in which it is generated. The transmitter transforms and encodes the data so as to produce electromagnetic waves or signals.
Communication Model: Transmission System
A transmission system can be a single transmission line or a complex network connecting source and destination.
Communication Model: Receiver
The receiver accepts the signal from the transmission system and converts it into a form that can easily be handled by the destination device.
Communication Model: Destination
Destination receives the incoming data from the receiver.
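The five-part model above can be sketched as a simple pipeline. Hex encoding stands in here for the signal conversion a real transmitter performs, and the ideal channel ignores noise and loss; both are simplifications for illustration.

```python
# Source -> Transmitter -> Transmission system -> Receiver -> Destination.

def source() -> str:
    return "hello, destination"            # data generated by the source

def transmitter(data: str) -> str:
    return data.encode("utf-8").hex()      # encode into a transmissible form

def transmission_system(signal: str) -> str:
    return signal                          # ideal channel: no loss or noise

def receiver(signal: str) -> str:
    # Convert the signal back into a form the destination can handle.
    return bytes.fromhex(signal).decode("utf-8")

message = receiver(transmission_system(transmitter(source())))
print(message)  # the destination receives the original data
```

Each function corresponds to one box of the communication model, which makes it easy to see where a real system would add modulation, routing or error handling.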
The modern world scenario is ever-changing. Data communication and networks have changed the way business and other daily affairs work; they now rely heavily on computer networks and internetworks.
A set of devices, often referred to as nodes, connected by media links is called a network.
A node can be any device capable of sending or receiving data generated by other nodes on the network, such as a computer or printer. The links connecting the devices are called communication channels.
A computer network is a telecommunication channel through which we can share data with other computers or devices connected to the same network. It is also called a data network. The best example of a computer network is the Internet.
A computer network does not mean a system with one control unit connected to multiple other systems as its slaves. That is a distributed system, not a computer network.
A network must be able to meet certain criteria; these are mentioned below:
Computer Networks: Performance
It can be measured in the following ways:
- Transit time: the time taken by a message to travel from one device to another.
- Response time: the time elapsed between an enquiry and its response.
Other ways to measure performance are :
- Efficiency of software
- Number of users
- Capability of connected hardware
Computer Networks: Reliability
It reflects the frequency at which network failures take place. The more failures there are, the less reliable the network.
Computer Networks: Security
It refers to the protection of data from any unauthorised user or access. While travelling through the network, data passes through many layers, and it can be intercepted if an attempt is made. Hence security is also a very important characteristic of networks.
Uses of a computer network
- Information and Resource Sharing: Computer networks allow organizations having units which are placed apart from each other, to share information in a very effective manner. Programs and software in any computer can be accessed by other computers linked to the network. It also allows sharing of hardware equipment, like printers and scanners among varied users.
- Retrieving Remote Information: Through computer networks, users can retrieve remote information on a variety of topics. The information is stored in remote databases to which the user gains access through information systems like the World Wide Web.
- Speedy Interpersonal Communication: Computer networks have increased the speed and volume of communication like never before. Electronic Mail (email) is extensively used for sending texts, documents, images, and videos across the globe. Online communications have increased by manifold times through social networking services.
- E-Commerce: Computer networks have paved the way for a variety of business and commercial transactions online, popularly called e-commerce. Users and organizations can pool funds, buy or sell items, pay bills, manage bank accounts, pay taxes, transfer funds and handle investments electronically.
- Highly Reliable Systems: Computer networks allow systems to be distributed in nature, by the virtue of which data is stored in multiple sources. This makes the system highly reliable. If a failure occurs in one source, then the system will still continue to function and data will still be available from the other sources.
- Cost–Effective Systems: Computer networks have reduced the cost of establishment of computer systems in organizations. Previously, it was imperative for organizations to set up expensive mainframes for computation and storage. With the advent of networks, it is sufficient to set up interconnected personal computers (PCs) for the same purpose.
- VoIP: VoIP or Voice over Internet protocol has revolutionized telecommunication systems. Through this, telephone calls are made digitally using Internet Protocols instead of the regular analog phone lines.
The major criteria that a Data Communication Network must meet are:
Performance is defined as the rate of transferring error-free data. It is measured by the response time: the time elapsed between the end of an inquiry and the beginning of the response, for example between requesting a file transfer and the start of the transfer. Factors that affect response time are:
- Number of Users: the more users on a network, the slower it will run.
- Transmission Speed: the speed at which data is transmitted, measured in bits per second (bps).
- Media Type: the type of physical connection used to connect nodes together.
- Hardware Type: slow computers (such as the old XT) versus fast ones (such as Pentiums).
- Software: how well the network operating system (NOS) is written.
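Transmission speed translates directly into transfer time. The sketch below computes an idealized transfer time from file size and link speed; protocol overhead and contention are ignored, so real transfers are slower than this figure.

```python
# Idealized transfer time: bits to move divided by link speed in bps.

def transfer_time_seconds(file_bytes: int, speed_bps: int) -> float:
    bits = file_bytes * 8          # 8 bits per byte
    return bits / speed_bps

# e.g. a 1 MB file over a 1 Mbps link:
print(transfer_time_seconds(1_000_000, 1_000_000))  # 8.0 seconds
```

The factor-of-8 conversion between bytes and bits is a common source of confusion when comparing advertised link speeds (bps) with file sizes (bytes).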
Consistency is the predictability of response time and accuracy of data.
- Users prefer consistent response times; they develop a feel for normal operating conditions. For example, if the “normal” response time for printing to a network printer is 3 seconds and a response time of over 30 seconds occurs, we know there is a problem in the system!
- Accuracy of data determines whether the network is reliable. If a system loses data, users will lose confidence in the information and will often stop using the system.
Reliability is the measure of how often a network is usable. MTBF (Mean Time Between Failures) is a measure of the average time a component is expected to operate between failures; it is normally provided by the manufacturer. A network failure can occur in the hardware, in the data-carrying medium or in the network operating system.
Recovery is the Network’s ability to return to a prescribed level of operation after a network failure. This level is where the amount of lost data is nonexistent or at a minimum. Recovery is based on having Back-up Files.
Security is the protection of Hardware, Software and Data from unauthorized access. Restricted physical access to computers, password protection, limiting user privileges and data encryption are common security methods. Anti-Virus monitoring programs to defend against computer viruses are a security measure.
A network topology is the arrangement in which computer systems or network devices are connected to each other. Topologies may define both the physical and the logical aspect of the network. The logical and physical topologies of the same network may be the same or different.
Point-to-point networks contain exactly two hosts (computers, switches, routers or servers) connected back to back using a single piece of cable. Often, the receiving end of one host is connected to the sending end of the other, and vice versa.
If the hosts are connected point-to-point logically, there may be multiple intermediate devices, but the end hosts are unaware of the underlying network and see each other as if they were connected directly.
In Bus topology, all devices share a single communication line or cable. Bus topology runs into problems when multiple hosts send data at the same time; therefore it either uses CSMA/CD technology or designates one host as Bus Master to resolve the contention. It is one of the simplest forms of networking, in which a failure of one device does not affect the other devices. But a failure of the shared communication line makes all the other devices stop functioning.
Both ends of the shared channel have line terminator. The data is sent in only one direction and as soon as it reaches the extreme end, the terminator removes the data from the line.
All hosts in Star topology are connected to a central device, known as hub device, using a point-to-point connection. That is, there exists a point to point connection between hosts and hub. The hub device can be any of the following:
- Layer-1 device such as hub or repeater
- Layer-2 device such as switch or bridge
- Layer-3 device such as router or gateway
As in Bus topology, the hub acts as a single point of failure: if the hub fails, connectivity of all hosts to all other hosts fails. Every communication between hosts takes place through the hub only. Star topology is not expensive, since connecting one more host requires only one cable, and configuration is simple.
In Ring topology, each host machine connects to exactly two other machines, creating a circular network structure. When one host tries to communicate with or send a message to a host that is not adjacent to it, the data travels through all the intermediate hosts. Connecting one more host to the existing structure may require only one extra cable.
A failure of any host results in failure of the whole ring; thus every connection in the ring is a point of failure. There are methods that employ an additional backup ring.
In this type of topology, a host is connected to one or multiple hosts. The topology may have hosts in point-to-point connection with every other host, or hosts in point-to-point connection with only a few hosts.
Hosts in Mesh topology also work as relays for other hosts which do not have direct point-to-point links. Mesh topology comes in two types:
- Full Mesh: All hosts have a point-to-point connection to every other host in the network; thus a network of n hosts requires n(n-1)/2 connections. It provides the most reliable network structure among all network topologies.
- Partial Mesh: Not all hosts have a point-to-point connection to every other host; hosts connect to each other in some arbitrary fashion. This topology is used where reliability needs to be provided only for some of the hosts.
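The n(n-1)/2 figure for a full mesh can be computed directly: every pair of hosts gets one dedicated point-to-point link, so the count is the number of host pairs.

```python
# Link count for a full mesh of n hosts: one link per pair of hosts.

def full_mesh_links(n: int) -> int:
    return n * (n - 1) // 2

for hosts in (2, 5, 10):
    print(hosts, "hosts ->", full_mesh_links(hosts), "links")
# 2 hosts -> 1 link, 5 -> 10, 10 -> 45
```

The quadratic growth in links is exactly why full mesh is reserved for small, reliability-critical cores and partial mesh is used elsewhere.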
Also known as Hierarchical topology, this is the most common form of network topology in use at present. This topology is an extended Star topology and inherits properties of Bus topology.
This topology divides the network into multiple levels/layers. Mainly in LANs, a network is divided into three tiers of network devices. The lowest is the access layer, where computers are attached. The middle layer is the distribution layer, which mediates between the upper and lower layers. The highest layer is the core layer, the central point of the network: the root of the tree from which all nodes fork.
All neighboring hosts have point-to-point connections between them. As in Bus topology, if the root goes down the entire network suffers, even though it is not a single point of failure. Every connection serves as a point of failure, the failing of which divides the network into unreachable segments.
This topology connects all the hosts in a linear fashion. Similar to Ring topology, all hosts are connected to exactly two other hosts, except the end hosts. If the end hosts in a daisy chain are connected, it becomes a Ring topology.
Each link in Daisy Chain topology represents a single point of failure: every link failure splits the network into two segments. Every intermediate host works as a relay for its immediate neighbors.
A network structure whose design contains more than one topology is said to be a hybrid topology. A hybrid topology inherits the merits and demerits of all the incorporated topologies.
The picture above represents an arbitrary hybrid topology. The combined topologies may contain attributes of Star, Ring, Bus and Daisy-chain topologies. Most WANs are connected by means of a Dual-Ring topology, and the networks connected to them are mostly Star topology networks. The Internet is the best example of a large hybrid topology.
Types of Networks
Communication networks can be of the following five types:
- Local Area Network (LAN)
- Metropolitan Area Network (MAN)
- Wide Area Network (WAN)
- Wireless Network
- Inter Network (Internet)
Local Area Network (LAN)
It is also called a LAN and is designed for small physical areas such as an office, a group of buildings or a factory. LANs are widely used because they are easy to design and troubleshoot. Personal computers and workstations are connected to each other through LANs. Different topologies can be used in a LAN: Star, Ring, Bus, Tree etc.
A LAN can be a network as simple as two connected computers sharing files, or as complex as one interconnecting an entire building.
LANs are also widely used to share resources like printers, shared hard drives etc.
Characteristics of LAN
- LANs are private networks, not subject to tariffs or other regulatory controls.
- LANs operate at relatively high speed compared to the typical WAN.
- There are different types of Media Access Control methods in a LAN; the prominent ones are Ethernet and Token Ring.
- A LAN connects computers in a single building, block or campus, i.e. it works in a restricted geographical area.
Applications of LAN
- One of the computers in a network can become a server, serving all the remaining computers, called clients. Software can be stored on the server and used by the clients.
- All the workstations in a building can be connected locally so that they communicate with each other without any Internet access.
- Sharing common resources like printers is another common application of a LAN.
Advantages of LAN
- Resource Sharing: Computer resources like printers, modems, DVD-ROM drives and hard disks can be shared with the help of local area networks. This reduces cost and hardware purchases.
- Software Applications Sharing: It is cheaper to use the same software over a network than to purchase separately licensed software for each client on the network.
- Easy and Cheap Communication: Data and messages can easily be transferred over networked computers.
- Centralized Data: The data of all network users can be saved on the hard disk of the server computer. This allows users to use any workstation on the network to access their data, because data is not stored locally on workstations.
- Data Security: Since data is stored centrally on the server computer, it is easy to manage it in one place, and the data is more secure too.
- Internet Sharing: Local Area Network provides the facility to share a single internet connection among all the LAN users. In Net Cafes, single internet connection sharing system keeps the internet expenses cheaper.
Disadvantages of LAN
- High Setup Cost: Although a LAN saves cost over time through shared computer resources, the initial cost of installing a Local Area Network is high.
- Privacy Violations: The LAN administrator has the rights to check personal data files of each and every LAN user. Moreover he can check the internet history and computer use history of the LAN user.
- Data Security Threat: Unauthorised users can access important data of an organization if centralized data repository is not secured properly by the LAN administrator.
- LAN Maintenance Job: A Local Area Network requires a LAN administrator, because there are problems such as software installations, hardware failures and cable disturbances. A LAN administrator is needed full time for this job.
- Covers Limited Area: Local Area Network covers a small area like one office, one building or a group of nearby buildings.
Metropolitan Area Network (MAN)
It was developed in the 1980s and is basically a bigger version of a LAN. It is also called a MAN and uses technology similar to a LAN. It is designed to extend over an entire city. It can be a means of connecting a number of LANs into a larger network, or it can be a single cable. It is mainly held and operated by a single private or public company.
Characteristics of MAN
- It generally covers towns and cities (up to about 50 km).
- Communication media used for a MAN are optical fibers, cables etc.
- Data rates are adequate for distributed computing applications.
Advantages of MAN
- Extremely efficient; provides fast communication via high-speed carriers such as fibre-optic cables.
- It provides a good backbone for a large network and gives greater access to WANs.
- The dual bus used in MAN helps the transmission of data in both directions simultaneously.
- A MAN usually encompasses several blocks of a city or an entire city.
Disadvantages of MAN
- More cable is required for a MAN connection from one place to another.
- It is difficult to make the system secure from hackers and industrial espionage (spying) because of the large geographical region covered.
Wide Area Network (WAN)
It is also called a WAN. A WAN can be a private or a public leased network. It is used for networks that cover large distances, such as the states of a country. It is not easy to design and maintain. Communication media used by WANs are the PSTN or satellite links. WANs operate on comparatively low data rates.
Characteristics of WAN
- It generally covers large distances(states, countries, continents).
- Communication medium used are satellite, public telephone networks which are connected by routers.
Advantages of WAN
- Covers a large geographical area, so long-distance businesses can connect on the one network.
- Shares software and resources with connecting workstations.
- Messages can be sent very quickly to anyone else on the network. These messages can have picture, sounds or data included with them(called attachments).
- Expensive things(such as printers or phone lines to the internet) can be shared by all the computers on the network without having to buy a different peripheral for each computer.
- Everyone on the network can use the same data. This avoids problems where some users may have older information than others.
Disadvantages of WAN
- Need a good firewall to restrict outsiders from entering and disrupting the network.
- Setting up a network can be expensive, slow and complicated; the bigger the network, the more expensive it is.
- Once set up, maintaining a network is a full-time job which requires network supervisors and technicians to be employed.
- Security is a real issue when many different people have the ability to use information from other computers. Protection against hackers and viruses adds more complexity and expense.
Digital wireless communication is not a new idea. Earlier, Morse code was used to implement wireless networks. Modern digital wireless systems have better performance, but the basic idea is the same.
Wireless Networks can be divided into three main categories:
- System interconnection
- Wireless LANs
- Wireless WANs
System interconnection is all about interconnecting the components of a computer using short-range radio. Some companies got together to design a short-range wireless network called Bluetooth to connect various components such as monitor, keyboard, mouse and printer, to the main unit, without wires. Bluetooth also allows digital cameras, headsets, scanners and other devices to connect to a computer by merely being brought within range.
In simplest form, system interconnection networks use the master-slave concept. The system unit is normally the master, talking to the mouse, keyboard, etc. as slaves.
These are the systems in which every computer has a radio modem and antenna with which it can communicate with other systems. Wireless LANs are becoming increasingly common in small offices and homes, where installing Ethernet is considered too much trouble. There is a standard for wireless LANs called IEEE 802.11, which most systems implement and which is becoming very widespread.
The radio network used for cellular telephones is an example of a low-bandwidth wireless WAN. This system has already gone through three generations.
- The first generation was analog and for voice only.
- The second generation was digital and for voice only.
- The third generation is digital and is for both voice and data.
An internetwork, or the Internet, is a combination of two or more networks. An internetwork can be formed by joining two or more individual networks by means of various devices such as routers, gateways and bridges.
Applications of the Internet
The Internet has many important applications. Of the various services available via the Internet, the three most important are e-mail, web browsing, and peer-to-peer services. E-mail, also known as electronic mail, is the most widely used and successful of Internet applications. Web browsing is the application that had the greatest influence on the dramatic expansion of the Internet and its use during the 1990s. Peer-to-peer networking is the newest of these three Internet applications, and also the most controversial, because its uses have created problems related to the access and use of copyrighted materials.
Whether judged by volume, popularity, or impact, e-mail has been and continues to be the principal Internet application. This is despite the fact that the underlying technologies have not been altered significantly since the early 1980s. In recent years, the continuing rapid growth in the use and volume of e-mail has been fueled by two factors. The first is the increasing number of Internet Service Providers (ISPs) offering the service; the second is that the number of physical devices capable of supporting e-mail has grown to include highly portable devices such as personal digital assistants (PDAs) and cellular telephones.
The volume of e-mail also continues to increase because there are more users, and because users now have the ability to attach documents of various types to e-mail messages. While this has long been possible, the formulation of Multipurpose Internet Mail Extensions (MIME) and its adoption by software developers has made it much easier to send and receive attachments, including word-processed documents, spreadsheets, and graphics. The result is that the volume of traffic generated by e-mail, as measured in terms of the number of data packets moving across the network, has increased dramatically in recent years, contributing significantly to network congestion.
E-mail has become an important part of personal communications for hundreds of millions of people, many of whom have substituted it for letters or telephone calls. In business, e-mail has become an important advertising medium, particularly in instances where the demand for products and services is time sensitive. For example, tickets for an upcoming sporting event are marketed by sending fans an e-mail message with information about availability and prices of the tickets. In addition, e-mail serves, less obviously, as the basis for some of the more important collaborative applications that have been developed, most notably Lotus Notes.
In the near future, voice-driven applications will play a much larger role on the Internet, and e-mail is sure to be one of the areas in which voice-driven applications will emerge most rapidly. E-mail and voice mail will be integrated, and in the process it seems likely that new models for Internet- based messaging will emerge.
Synchronous communication, in the form of the highly popular “instant messaging,” may be a precursor of the messaging models of the near future. Currently epitomized by AOL Instant Messenger and Microsoft’s Windows Messenger, instant messaging applications generally allow users to share various types of files (including images, sounds, and URLs), stream content, and use the Internet as a medium for telephony, as well as exchanging messages with other users in real time and participating in online chat rooms.
The web browser is another Internet application of critical importance. Unlike e-mail, which was developed and then standardized in the early, noncommercial days of the Internet, the web browser was developed in a highly commercialized environment dominated by such corporations as Microsoft and Netscape, and heavily influenced by the World Wide Web Consortium (W3C). While Microsoft and Netscape have played the most obvious parts in the development of the web browser, particularly from the public perspective, the highly influential role of the W3C may be the most significant in the long term.
Founded in 1994 by Tim Berners-Lee, the original architect of the web, the goal of the W3C has been to develop interoperable technologies that lead the web to its full potential as a forum for communication, collaboration, and commerce. What the W3C has been able to do successfully is to develop and promote the adoption of new, open standards for web-based documents. These standards have been designed to make web documents more expressive (Cascading Stylesheets), to provide standardized labeling so that users have a more explicit sense of the content of documents (Platform for Internet Content Selection, or PICS), and to create the basis for more interactive designs (the Extensible Markup Language, or XML ). Looking ahead, a principal goal of the W3C is to develop capabilities that are in accordance with Berners-Lee’s belief that the web should be a highly collaborative information space.
Microsoft and Netscape dominate the market for web browsers, with Microsoft’s Internet Explorer holding about three-quarters of the market, and Netscape holding all but a small fraction of the balance. During the first few years of web growth, the competition between Microsoft and Netscape for the browser market was fierce, and both companies invested heavily in the development of their respective browsers. Changes in business conditions toward the end of the 1990s and growing interest in new models of networked information exchange caused each company to focus less intensely on the development of web browsers, resulting in a marked slowing of their development and an increasing disparity between the standards being developed by W3C and the support offered by Internet Explorer or Netscape Navigator.
Now, the future of the web browser may be short-lived, as standards developers and programmers elaborate the basis for network-aware applications that eliminate the need for the all-purpose browser. It is expected that as protocols such as XML and the Simple Object Access Protocol (SOAP) grow more sophisticated in design and functionality, an end user’s interactions with the web will be framed largely by desktop applications invoked in the service of specific types of documents retrieved from remote sources.
The open source model has important implications for the future development of web browsers. Because open source versions of Netscape have been developed on a modular basis, and because the source code is available with few constraints on its use, new or improved services can be added quickly and with relative ease. In addition, open source development has accelerated efforts to integrate web browsers and file managers. These efforts, which are aimed at reducing functional distinctions between local and network-accessible resources, may be viewed as an important element in the development of the “seamless” information space that Berners-Lee envisions for the future of the web.
One of the fastest growing, most controversial, and potentially most important areas of Internet applications is peer-to-peer (P2P) networking. Peer-to-peer networking is based on the sharing of physical resources, such as hard drives, processing cycles, and individual files among computers and other intelligent devices. Unlike client-server networking, where some computers are dedicated to serving other computers, each computer in peer-to-peer networking has equivalent capabilities and responsibilities.
Internet-based peer-to-peer applications position the desktop at the center of a computing matrix, usually on the basis of “cross-network” protocols such as the Simple Object Access Protocol (SOAP) or XML-RPC (Remote Procedure Calling), thus enabling users to participate in the Internet more interactively.
There are two basic P2P models in use today. The first model is based on a central host computer that coordinates the exchange of files by indexing the files available across a network of peer computers. This model has been highly controversial because it has been employed widely to support the unlicensed exchange of commercial sound recordings, software, and other copyrighted materials. Under the second model, which may prove ultimately to be far more important, peer-to-peer applications aggregate and use otherwise idle resources residing on low-end devices to support high-demand computations. For example, a specially designed screensaver running on a networked computer may be employed to process astronomical or medical data.
The remarkable developments during the late 1990s and early 2000s suggest that making accurate predictions about the next generation of Internet applications is difficult, if not impossible. Two aspects of the future of the Internet that one can be certain of, however, are that network bandwidth will be much greater, and that greater bandwidth and its management will be critical factors in the development and deployment of new applications. What will greater bandwidth yield? In the long run, it is difficult to know, but in the short term it seems reasonable to expect new communication models, videoconferencing, increasingly powerful tools for collaborative work across local and wide area networks, and the emergence of the network as a computational service of unprecedented power.
Protocols & Standards
Protocols are essential for communication, authentication and error detection. TCP/IP is the most common protocol, but it is in fact two distinct protocols combined to perform different jobs. Protocols allow computers to talk to each other by setting ground rules; without these ground rules there would be no standard for how to communicate. Imagine a phone conversation: you pick up and say hello, the other person listens and replies with hello while you listen. Think of that as a rule to listen while the other person is talking. If the rule wasn't there and you both spoke at the same time, neither of you would be able to listen, and you would not be able to communicate. That is just what a protocol is: a set of rules that makes communication possible.
Network standards are also ground rules, set by standards bodies so that hardware is compatible among similar computers and interoperability is assured. This ensures backwards compatibility and compatibility from vendor to vendor. Standards are necessary because if each company had its own protocol and did not allow it to talk with other protocols, machines from different vendors could not communicate; one company might be hugely successful while another ran out of business simply for lack of interoperability. This is why network standards and protocols are necessary: they are what allow different computers, from different companies, running different software, to communicate with each other, making networking possible.
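To make the "ground rules" idea concrete, here is a toy sketch in Python. The verbs HELLO, DATA and ACK are invented for illustration, not taken from any real protocol: both sides agree that every message is a verb plus an optional payload ending in a newline, and that each request gets exactly one response.

```python
# A toy protocol: each message is "VERB payload\n"; the ground rule is
# that every request gets exactly one response before the next request.

def encode(verb: str, payload: str = "") -> bytes:
    """Serialize a message according to the agreed wire format."""
    return f"{verb} {payload}".strip().encode() + b"\n"

def decode(raw: bytes):
    """Parse one wire message back into (verb, payload)."""
    verb, _, payload = raw.rstrip(b"\n").decode().partition(" ")
    return verb, payload

def responder(raw: bytes) -> bytes:
    """One side of the conversation: answer HELLO with HELLO, echo DATA."""
    verb, payload = decode(raw)
    if verb == "HELLO":
        return encode("HELLO")          # the greeting rule
    if verb == "DATA":
        return encode("ACK", payload)   # acknowledge what was heard
    return encode("ERR", "unknown verb")

# A short, rule-abiding conversation:
print(decode(responder(encode("HELLO"))))        # ('HELLO', '')
print(decode(responder(encode("DATA", "hi"))))   # ('ACK', 'hi')
```

Because both ends share the same `encode`/`decode` rules, either side can parse what the other sends; change the format on one side only and the conversation breaks, which is exactly why standards matter.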
The first computer networks were designed with the hardware as the main concern and the software as an afterthought.
• This strategy no longer works. Network software is now highly structured.
• To reduce their design complexity, most networks are organized as a series or hierarchy of layers or levels.
• The number of layers, the name of each layer, the contents of each layer, and the function of each layer differ from network to network.
• Layer n on one machine communicates with layer n on another machine on the network using a set of rules known as the layer n protocol.
• A protocol is an agreement between the communicating parties on how the communication is to proceed.
• The entities comprising the corresponding layers on two communicating machines over the network are called peers.
• In reality, no data is transferred directly between layer n on any two machines. Instead, data and control information are passed to the layer below.
• Additional information including protocol control information may be appended by each layer to data as it travels from higher to lower layers in the form of layer headers.
• Below layer 1 is the physical medium through which actual communication occurs over communication channels.
• Between each pair of adjacent layers there is an interface.
• The interface defines which primitive operations and services the lower layer offers to the upper layer.
• The set of layers and associated protocols is called network architecture.
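The header-wrapping described in the points above can be sketched in a few lines of Python; the layer names below are illustrative, not a real protocol stack:

```python
# Each layer prepends its own header as data moves down the stack,
# and the peer layers strip the headers again on the receiving side.

LAYERS = ["transport", "network", "link"]   # illustrative layer names

def send_down(payload: str) -> str:
    """Encapsulate: each lower layer prepends its header."""
    for layer in LAYERS:
        payload = f"[{layer}-hdr]{payload}"
    return payload

def receive_up(frame: str) -> str:
    """Decapsulate: peer layers strip the headers in reverse order."""
    for layer in reversed(LAYERS):
        header = f"[{layer}-hdr]"
        assert frame.startswith(header), f"malformed {layer} header"
        frame = frame[len(header):]
    return frame

wire = send_down("hello")
print(wire)               # [link-hdr][network-hdr][transport-hdr]hello
print(receive_up(wire))   # hello
```

Note that each layer only ever inspects its own header; the rest of the frame is opaque payload to it, which is what lets the layers evolve independently.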
Design Issues of Layers
A number of design issues exist for the layered approach of computer networks. Some of the main design issues are as follows:
Network channels and components may be unreliable, resulting in the loss of bits during data transfer. So, an important design issue is to make sure that the information transferred is not distorted.
Networks are continuously evolving. The sizes are continually increasing leading to congestion. Also, when new technologies are applied to the added components, it may lead to incompatibility issues. Hence, the design should be done so that the networks are scalable and can accommodate such additions and alterations.
At a particular time, innumerable messages are being transferred between large numbers of computers. So, a naming or addressing system should exist so that each layer can identify the sender and receivers of each message.
Unreliable channels introduce a number of errors in the data streams that are communicated. So, the layers need to agree upon common error detection and error correction methods so as to protect data packets while they are transferred.
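A common agreement of this kind is the 16-bit ones'-complement checksum used by IP, TCP and UDP. The sketch below shows the arithmetic only, not any one protocol's full rules (which also cover pseudo-headers and field layout):

```python
# 16-bit ones'-complement checksum, the style used by IP, TCP and UDP.

def checksum(data: bytes) -> int:
    if len(data) % 2:                              # pad odd-length input
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]      # add 16-bit words
        total = (total & 0xFFFF) + (total >> 16)   # fold carries back in
    return ~total & 0xFFFF                         # ones' complement

packet = b"hello"
corrupted = b"hellp"                               # one byte changed
print(checksum(packet) != checksum(corrupted))     # True: error detected
```

The sender transmits the checksum along with the data; the receiver recomputes it and discards the packet if the values disagree.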
If the rate at which data is produced by the sender is higher than the rate at which data is received by the receiver, there are chances of overflowing the receiver. So, a proper flow control mechanism needs to be implemented.
Computer networks provide services in the form of network resources to the end users. The main design issue is to allocate and deallocate resources to processes. The allocation/deallocation should occur so that minimal interference among the hosts occurs and there is optimal usage of the resources.
It is not feasible to allocate a dedicated path for each message while it is being transferred from the source to the destination. So, the data channel needs to be multiplexed, so as to allocate a fraction of the bandwidth or time to each host.
There may be multiple paths from the source to the destination. Routing involves choosing an optimal path among all possible paths, in terms of cost and time. There are several routing algorithms that are used in network systems.
A major factor of data communication is to defend it against threats like eavesdropping and surreptitious alteration of messages. So, there should be adequate mechanisms to prevent unauthorized access to data through authentication and cryptography.
Each protocol which communicates in a layered architecture (e.g. based on the OSI Reference Model) communicates in a peer-to-peer manner with its remote protocol entity. Communication between adjacent protocol layers (i.e. within the same communications node) is managed by calling functions, called primitives, between the layers. There are various types of actions that may be performed by primitives. Examples of primitives include: Connect, Data, Flow Control, and Disconnect.
Primitives for communications between peer protocol entities
Each primitive specifies the action to be performed or advises the result of a previously requested action. A primitive may also carry the parameters needed to perform its functions. One parameter is the packet to be sent/received to the layer above/below (or, more accurately, a pointer to data structures containing a packet, often called a “buffer”).
There are four types of primitive used for communicating data. The four basic types of primitive are:
- Request: A primitive sent by layer (N + 1) to layer N to request a service. It invokes the service and passes any required parameters.
- Indication: A primitive returned to layer (N + 1) from layer N to advise of activation of a requested service or of an action initiated by the layer N service.
- Response: A primitive provided by layer (N + 1) in reply to an indication primitive. It may acknowledge or complete an action previously invoked by an indication primitive.
- Confirm: A primitive returned to the requesting (N + 1)st layer by the Nth layer to acknowledge or complete an action previously invoked by a request primitive.
To send Data, the sender invokes a Data.Request specifying the packet to be sent, and the Service Access Point (SAP) of the layer below. At the receiver, a Data.Indication primitive is passed up to the corresponding layer, presenting the received packet to the peer protocol entity.
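The Request / Indication / Response / Confirm sequence can be mimicked with ordinary function calls between layer objects. All class and method names below are invented for illustration; real stacks pass these primitives through queues or callbacks rather than direct calls:

```python
# Modeling the Request / Indication / Response / Confirm sequence
# between layer N+1 (the service user) and layer N (the provider).

class LayerN:
    """Service provider; 'peer' is layer N on the other machine."""
    def __init__(self):
        self.peer = None
        self.user = None                     # the layer N+1 entity above

    def data_request(self, packet):          # 1. Request (sender side)
        reply = self.peer.deliver(packet)    # transfer via the layer below
        self.user.data_confirm(reply)        # 4. Confirm (back to sender)

    def deliver(self, packet):               # runs on the receiving machine
        return self.user.data_indication(packet)   # 2. Indication

class UserLayer:
    """Layer N+1 entity; records what it saw."""
    def __init__(self):
        self.log = []
    def data_indication(self, packet):       # receiver's layer N+1
        self.log.append(("indication", packet))
        return "ok"                          # 3. Response
    def data_confirm(self, status):          # sender's layer N+1
        self.log.append(("confirm", status))

sender_n, receiver_n = LayerN(), LayerN()
sender_user, receiver_user = UserLayer(), UserLayer()
sender_n.user, receiver_n.user = sender_user, receiver_user
sender_n.peer = receiver_n

sender_n.data_request("hello")
print(receiver_user.log)   # [('indication', 'hello')]
print(sender_user.log)     # [('confirm', 'ok')]
```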
Connection-Oriented and Connectionless Services
Connection Oriented Services
There is a sequence of operation to be followed by the users of connection oriented service. These are:
- Connection is established.
- Information is sent.
- Connection is released.
In connection oriented service we have to establish a connection before starting the communication. Once the connection is established, we send the message or the information, and then we release the connection.
Connection oriented service is more reliable than connectionless service. In connection oriented service, the message can be resent if there is an error at the receiver's end. An example of a connection oriented protocol is TCP (Transmission Control Protocol).
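The three-step sequence above maps directly onto the TCP socket API; a minimal loopback sketch in Python:

```python
import socket

# 1. Connection is established (TCP's handshake runs under the hood).
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
server.listen(1)

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())
conn, _ = server.accept()

# 2. Information is sent over the established connection.
client.sendall(b"hello over TCP")
received = conn.recv(1024)
print(received)                      # b'hello over TCP'

# 3. Connection is released.
client.close(); conn.close(); server.close()
```

The client cannot send a byte until `connect()` has succeeded, which is exactly the "connection first, data second" discipline described above.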
Connection Less Services
It is similar to the postal service, as each message carries the full address of where it (the letter) is to be delivered. Each message is routed independently from source to destination. The order in which messages are sent can differ from the order in which they are received.
In connectionless service, the data is transferred in one direction from source to destination without checking whether the destination is still there, or whether it is prepared to accept the message. Authentication is not needed. An example of a connectionless service is the UDP (User Datagram Protocol) protocol.
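For contrast, here is the same exchange done connectionless with UDP in Python: no connection is set up first, and each datagram carries the full destination address, like the address on a letter:

```python
import socket

# Connectionless: no handshake; every datagram names its destination.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sendto() names the destination explicitly; the sender never checks
# whether anyone is listening before transmitting.
sender.sendto(b"hello over UDP", receiver.getsockname())

data, addr = receiver.recvfrom(1024)
print(data)     # b'hello over UDP'
sender.close(); receiver.close()
```

If the receiver were not bound, the datagram would simply be lost, which is the unreliability the text describes.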
ISO-OSI reference model
ISO and OSI are easily confused, but they are closely related. ISO is the International Organization for Standardization, the body that built the OSI (Open Systems Interconnection) model based on its standards.
The OSI model is basically a blueprint for every network to follow so that different networks can connect with each other which is what the Internet is. The model has been used since 1984 when it was created by a team at ISO. It consists of 7 layers in the order:
1. Physical Layer
This is where the cabling and any physical connections come into play, such as what type of cable to use, or the frequency if using WiFi, following the IEEE standards (802.11 for WiFi, 802.3 for Ethernet). This layer has nothing to do with the devices themselves.
2. Data Link
Switches and network cards are the main targets at this layer. It determines what MAC address the data is trying to reach and sends it to its appropriate destination.
3. Network Layer
The network layer deals with the IP address part. MAC addresses are only used on the local network. To connect to the Internet, with its billions of devices, you need what is called an IP address, provided by your ISP, to be able to send data anywhere in the world.
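The division of labour between MAC and IP addresses comes down to one question a host asks for every packet: is the destination on my own network (deliver directly by MAC) or not (hand the packet to the gateway)? A sketch, using an assumed example subnet:

```python
import ipaddress

# Hypothetical host configuration: an address on the 192.168.1.0/24 subnet.
my_network = ipaddress.ip_network("192.168.1.0/24")

def next_hop(destination: str) -> str:
    """Deliver locally (by MAC) if on-link, else send to the gateway."""
    if ipaddress.ip_address(destination) in my_network:
        return "deliver directly (resolve MAC of " + destination + ")"
    return "forward to default gateway"

print(next_hop("192.168.1.42"))   # on the local subnet
print(next_hop("8.8.8.8"))        # forward to default gateway
```

Real hosts make the same decision using their configured address, netmask and routing table; the subnet here is just an example.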
4. Transport Layer
Simply put, this layer figures out how much data to send and how fast. It then bundles the data up into units called segments and sends them off.
5. Session Layer
This layer establishes, manages and ends the sessions between a device and a server, keeping each application's dialogue separate.
6. Presentation Layer
The data is converted (from packets) into readable data that each app or program can use and understand.
7. Application Layer
Here is where the data is used in functions in an app to be able to communicate with each other. This can be a browser or a mail client you have running. This happens inside the app and is the last step before the process repeats.
Most people who view a diagram will read it from the bottom up, starting at layer 7, but the order above is the order in which everything happens. Hopefully this helps you understand what is going on whenever you send data somewhere.
TCP/IP reference model.
TCP/IP means Transmission Control Protocol and Internet Protocol. It is the network model used in the current Internet architecture as well. Protocols are sets of rules which govern every possible communication over a network. These protocols describe the movement of data between the source and destination across the internet. They also offer simple naming and addressing schemes.
Protocols and networks in the TCP/IP model:
Overview of TCP/IP reference model
TCP/IP, that is, Transmission Control Protocol and Internet Protocol, was developed by the U.S. Department of Defense's Advanced Research Projects Agency (ARPA, later DARPA) as part of a research project on network interconnection, to connect remote machines.
The features that stood out during the research, which led to making the TCP/IP reference model were:
- Support for a flexible architecture. Adding more machines to a network was easy.
- The network was robust, and connections remained intact as long as the source and destination machines were functioning.
The overall idea was to allow one application on one computer to talk to (send data packets to) another application running on a different computer.
Different Layers of TCP/IP Reference Model
Below we have discussed the 4 layers that form the TCP/IP reference model:
Layer 1: Host-to-network Layer
- Lowest layer of all.
- The host connects to the network using some protocol, so that packets can be sent over it.
- This protocol varies from host to host and network to network.
Layer 2: Internet layer
- It is based on a connectionless, packet-switched internetwork; hence the name internet layer.
- It is the layer which holds the whole architecture together.
- It helps the packet to travel independently to the destination.
- The order in which packets are received may differ from the order in which they are sent.
- IP (Internet Protocol) is used in this layer.
- The various functions performed by the Internet Layer are:
- Delivering IP packets
- Performing routing
- Avoiding congestion
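The IP packet that this layer delivers begins with a fixed 20-byte header. Packing and unpacking one with Python's struct module shows the fields involved; the addresses are documentation examples, and the checksum is left at zero in this sketch:

```python
import socket
import struct

# Build a minimal 20-byte IPv4 header (no options), then parse it back.
version_ihl = (4 << 4) | 5          # version 4, header length 5 * 4 bytes
header = struct.pack(
    "!BBHHHBBH4s4s",
    version_ihl, 0,                 # version/IHL, type of service
    20, 1,                          # total length, identification
    0, 64,                          # flags/fragment offset, TTL
    17,                             # protocol number 17 = UDP
    0,                              # header checksum (zero in this sketch)
    socket.inet_aton("192.0.2.1"),      # source (example address)
    socket.inet_aton("198.51.100.7"),   # destination (example address)
)

fields = struct.unpack("!BBHHHBBH4s4s", header)
print(fields[0] >> 4)                   # 4  (IP version)
print(fields[6])                        # 17 (carried protocol: UDP)
print(socket.inet_ntoa(fields[9]))      # 198.51.100.7
```

Every router on the path reads just this header, decrements the TTL, and forwards the packet toward the destination address.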
Layer 3: Transport Layer
- It decides if data transmission should be on parallel paths or a single path.
- Functions such as multiplexing, segmenting or splitting on the data is done by transport layer.
- The applications can read and write to the transport layer.
- Transport layer adds header information to the data.
- Transport layer breaks the message (data) into small units so that they are handled more efficiently by the network layer.
- The transport layer also arranges the packets to be sent in sequence.
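Segmenting and re-sequencing can be sketched as follows; this is a toy illustration of the idea, not a real transport protocol:

```python
# Toy segmentation: split a message into numbered segments, then
# reassemble it correctly even if the segments arrive out of order.

def segment(message: bytes, size: int):
    """Break the message into (sequence_number, chunk) pairs."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(segments):
    """Sort by sequence number and concatenate the chunks."""
    return b"".join(chunk for _, chunk in sorted(segments))

segs = segment(b"hello transport layer", 6)
segs.reverse()                       # simulate out-of-order arrival
print(reassemble(segs))              # b'hello transport layer'
```

The sequence numbers are what let the receiver restore order; real transport protocols add acknowledgements and retransmission on top of this idea.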
Layer 4: Application Layer
The TCP/IP specifications described a lot of applications that were at the top of the protocol stack. Some of them were TELNET, FTP, SMTP, DNS etc.
- TELNET is a two-way communication protocol which allows connecting to a remote machine and running applications on it.
- FTP(File Transfer Protocol) is a protocol, that allows File transfer amongst computer users connected over a network. It is reliable, simple and efficient.
- SMTP (Simple Mail Transfer Protocol) is a protocol used to transport electronic mail between a source and a destination, directed via a route.
- DNS (Domain Name System) resolves a textual host name into an IP address for hosts connected over a network.
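In Python, name resolution is one call away; resolving localhost needs no network access, since it is answered locally:

```python
import socket

# DNS (together with the local hosts file) maps names to IP addresses.
address = socket.gethostbyname("localhost")
print(address)            # typically 127.0.0.1
```

Looking up a real Internet host name with the same call would send a DNS query to the configured resolver.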
The transport layer mentioned above deserves a closer look here, since all of these application protocols run on top of it:
- It allows peer entities to carry on a conversation.
- It defines two end-to-end protocols: TCP and UDP.
- TCP (Transmission Control Protocol): a reliable connection-oriented protocol which handles a byte-stream from source to destination without error, with flow control.
- UDP (User Datagram Protocol): an unreliable connectionless protocol that does not provide TCP's sequencing and flow control. E.g. a one-shot request-reply kind of service.
Merits of TCP/IP model
- It operates independently.
- It is scalable.
- Client/server architecture.
- Supports a number of routing protocols.
- Can be used to establish a connection between two computers.
Demerits of TCP/IP
- In this model, the transport layer does not guarantee delivery of packets.
- The model is not general; it cannot be used to describe any other protocol stack.
- Replacing protocols is not easy.
- It does not clearly separate its services, interfaces and protocols.