Wide Area Network (WAN)

Types and Characteristics of WANs

What is a WAN?

There are two prevailing definitions of a Wide Area Network (WAN). The book definition of a WAN is a network that spans a large geographical area, typically interconnecting multiple Local Area Networks (LANs). The practical definition of a WAN is a network that traverses a public network or commercial carrier, using one of several WAN technologies.

What are its Main Components?

The main components for a WAN are routers, switches and modems. These components are described below in the hardware section.

CPE – Devices located on the subscriber's premises are called customer premises equipment (CPE).
The subscriber owns the CPE or leases the CPE from the service provider. A copper or fiber cable connects the CPE to the service provider's nearest exchange or central office. This cabling is often called the local loop, or "last-mile".

DTE / DCE – Devices that put data on the local loop are called data circuit-terminating equipment, or data communications equipment (DCE). The customer devices that pass the data to the DCE are called data terminal equipment (DTE). The DCE primarily provides an interface for the DTE into the communication link on the WAN cloud.

Hardware

A WAN needs several types of hardware component in order to function. The typical items of hardware used in a WAN are:

Router – An electronic device that connects a local area network (LAN) to a wide area network (WAN) and handles the task of routing messages between the two networks. Operates at layer 3, and makes decisions using IP addresses.

Switch – A switch is a network device that selects a path or circuit for sending a unit of data to its next destination. Operates at layer 2, and uses MAC addresses to send data to the correct destination.

Modem – Short for modulator/demodulator, a modem enables a computer to communicate with other computers over telephone lines. Operates at layer 1, where signals are converted from digital to analogue and vice versa for transmission and reception.

WAN Standards

WANs operate at layers 1 and 2 of the OSI model: the physical layer and the data link layer. The physical layer protocols describe how to provide the electrical, mechanical and functional connections to the services provided by the ISP. The data link layer defines how data is encapsulated for transmission to remote sites.

Encapsulation

Encapsulation is the wrapping of data in a particular protocol header. Remember that WANs operate at the physical layer and the data link layer of the OSI model, and that higher-layer protocols such as IP are encapsulated when sent across the WAN link. Serial interfaces support a wide range of WAN encapsulation types, which must be manually specified; these include SDLC, PPP and Frame Relay. Whichever WAN encapsulation is used, it must be identical on both sides of the point-to-point link.
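
As a rough sketch of the idea, the Python snippet below wraps a higher-layer packet in a made-up link-layer header and trailer. The frame layout and the encapsulate helper are invented for illustration and do not follow any real WAN protocol (though 0x0021 happens to be the protocol number PPP uses for IP).

```python
import struct

def encapsulate(payload: bytes, protocol_id: int) -> bytes:
    """Wrap a higher-layer payload in a toy data link frame:
    a 2-byte protocol ID and a 2-byte length as the header,
    the payload itself, then a 1-byte end-of-frame trailer."""
    header = struct.pack("!HH", protocol_id, len(payload))
    trailer = b"\x7e"                     # flag byte marking the end of the frame
    return header + payload + trailer

# A layer 3 packet (IP header and data) handed down to the WAN interface.
ip_packet = b"...IP header and data..."

# Both ends of the point-to-point link must use the same framing.
frame = encapsulate(ip_packet, protocol_id=0x0021)   # 0x0021 = IP when using PPP
print(frame.hex())
```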

Packet and Circuit Switching

Circuit switching and packet switching are both used in high-capacity networks. The majority of switched networks today get data across the network through packet switching.

Because it provides a dedicated path, circuit switching is often considered more reliable than packet switching, but it is an older and more expensive technology; packet switching is the more modern approach and makes more efficient use of network capacity.

General Routing Issues

What is a Routing Protocol?

A routing protocol is a protocol that specifies how routers communicate and exchange information on a network. Initially, each router knows only about its directly connected networks and its immediate neighbors; by sharing this information through the routing protocol, the routers build up a picture of the wider network topology.

Protocol

RIP (Routing Information Protocol) was one of the most commonly used protocols on internal networks. Routers use RIP to adapt dynamically to changes in network connections and to communicate information about which networks they can reach and the distance between them. RIP is sometimes jokingly said to stand for Rest in Pieces, a reference to its reputation for breaking unexpectedly and rendering a network unable to function.

Routing Algorithms

Distance Vector

This type of routing protocol requires that each router simply inform its neighbors of its routing table. Distance vector protocols are based on the Bellman-Ford algorithm.
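
As a minimal sketch of distance vector routing in Python (the three-router topology and link costs below are invented for illustration, not taken from the article): each router starts with only its directly connected links, advertises its table to its neighbors, and each neighbor applies the Bellman-Ford relaxation until the tables stop changing.

```python
INFINITY = float("inf")

# Hypothetical link costs between directly connected routers.
links = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2},
    "C": {"A": 4, "B": 2},
}

# Initially each router knows only itself and its directly attached links.
# Table entries map destination -> (cost, next hop).
tables = {
    router: {router: (0, router),
             **{n: (cost, n) for n, cost in neighbors.items()}}
    for router, neighbors in links.items()
}

def exchange_once():
    """Every router advertises its table to its neighbors, which apply the
    Bellman-Ford relaxation: cost(dest) = cost of link + advertised cost."""
    changed = False
    for router, neighbors in links.items():
        for neighbor, link_cost in neighbors.items():
            for dest, (adv_cost, _) in list(tables[router].items()):
                new_cost = link_cost + adv_cost
                if new_cost < tables[neighbor].get(dest, (INFINITY, None))[0]:
                    tables[neighbor][dest] = (new_cost, router)
                    changed = True
    return changed

# Periodic updates repeat until no table changes (the network has converged).
while exchange_once():
    pass

print(tables["A"])   # {'A': (0, 'A'), 'B': (1, 'B'), 'C': (3, 'B')}
```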

Link State

This type of routing protocol requires that each router maintain a map of the network topology, built from link-state advertisements flooded by every router. The link state approach is based on Dijkstra's shortest-path algorithm.
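
For contrast with the distance vector sketch above, here is a minimal Dijkstra computation in Python. The four-router topology is again an invented example; in a real link-state protocol every router would build this map from flooded link-state advertisements before running the algorithm.

```python
import heapq

# The full network map, as every link-state router would hold it.
topology = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}

def dijkstra(source):
    """Return the lowest known cost from `source` to every reachable router."""
    dist = {source: 0}
    queue = [(0, source)]                    # priority queue of (cost, router)
    while queue:
        cost, node = heapq.heappop(queue)
        if cost > dist.get(node, float("inf")):
            continue                         # stale entry, a cheaper path was found
        for neighbor, link_cost in topology[node].items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                heapq.heappush(queue, (new_cost, neighbor))
    return dist

print(dijkstra("A"))   # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```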

IGRP

IGRP is a distance vector routing protocol invented by Cisco and used to exchange routing data within an autonomous system. Distance vector protocols measure distances and compare routes. Routers that use distance vector routing must send all or a portion of their routing table in a routing update message at regular intervals to each neighbor router.

Addressing and Routing

What does routing mean?

Routing is the process of deciding how to move packets from one network to another.
The directions, also known as routes, can be learned by a router using a routing protocol; the information is then passed from router to router along the route to the destination.

IP Addresses

Every machine connected to the Internet is assigned an IP address; an example would be 192.168.0.1. IP addresses are displayed in decimal format to make them easier for humans to read, but computers communicate in binary. The four numbers that make up an IP address are called octets, and each octet consists of eight bits, so together they form a 32-bit address. The octets are used to divide IP addresses into classes that can be assigned within a network; the three main classes we deal with are Class A, B and C. The octets of an IP address are split into two parts, network and host. In a Class A address the first octet is the network portion, which determines which network the computer belongs to, and the remaining octets identify the hosts that belong to that network.
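
The short sketch below works through that arithmetic for the 192.168.0.1 example, converting each octet to its eight binary bits and reading the class from the first octet (the class ranges in the comments are the standard first-octet ranges; the snippet itself is only an illustration).

```python
address = "192.168.0.1"

octets = [int(part) for part in address.split(".")]       # [192, 168, 0, 1]
binary = ".".join(f"{octet:08b}" for octet in octets)     # eight bits per octet

print(binary)                          # 11000000.10101000.00000000.00000001
print(len(binary.replace(".", "")))    # 32 -> a 32-bit address

# The first octet determines the classful address class:
#   Class A: 1-126, Class B: 128-191, Class C: 192-223
first_octet = octets[0]
if 1 <= first_octet <= 126:
    print("Class A: first octet is network, last three octets are host")
elif 128 <= first_octet <= 191:
    print("Class B: first two octets are network, last two are host")
elif 192 <= first_octet <= 223:
    print("Class C: first three octets are network, last octet is host")
```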

Subnetting

Subnetting allows you to create multiple networks within a Class A, B or C address. The subnet address is the address used by your LAN. A Class C network address would have a subnet mask of 255.255.255.0; the subnet mask identifies which portion of an address is network and which is host. For example, in 192.168.6.15 the first three octets are the network address and the last octet is the host (workstation). Subnetting matters because gateways need to forward packets to other LANs: by giving each NIC on the gateway an IP address and a subnet mask, the gateway can route packets from LAN to LAN. When a packet arrives, the gateway uses the bits of the subnet portion of the destination IP address to decide which LAN to send the packet to.
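
A minimal sketch of that network/host split, using Python's standard ipaddress module with the 192.168.6.15 address and 255.255.255.0 mask mentioned above:

```python
import ipaddress

# An interface pairs a host address with its subnet mask.
interface = ipaddress.ip_interface("192.168.6.15/255.255.255.0")

print(interface.ip)                          # 192.168.6.15   (the host)
print(interface.network)                     # 192.168.6.0/24 (the network portion)
print(interface.network.netmask)             # 255.255.255.0
print(interface.network.num_addresses - 2)   # 254 usable host addresses
```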

Circuit Switched Leased Lines

A circuit switched network is one that establishes a dedicated circuit (or channel) between nodes and terminals before the users may communicate. Here are some terms associated with circuit switched networks.

Frame relay is a telecommunication service designed for cost-efficient data transmission between local area networks (LANs).

Basic Rate Interface (BRI) is an ISDN service used by small businesses for Internet connectivity. An ISDN BRI provides two 64 Kbps digital channels to the user.
Primary Rate Interface (PRI) is a telecommunications standard for carrying voice and data transmissions between two locations.
All data and voice channels are ISDN and operate at 64 kbit/s.

Packet Switching

Source: http://www.raduniversity.com/networks/2004/PacketSwitching/main.htm

Packet switching refers to protocols in which messages are broken up into small packets before they are sent. Each packet is then transmitted over the Internet, and at the destination the packets are reassembled into the original message. The main difference between packet switching and circuit switching is that the communication lines are not dedicated to passing messages from the source to the destination: in packet switching, different messages can use the same network resources within the same time period.
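
The toy Python sketch below illustrates the idea (it is not a real protocol; the packet size and helper functions are invented for this example): a message is split into numbered packets that may arrive out of order and are reassembled at the destination.

```python
def packetize(message, size=8):
    """Split a message into (sequence number, chunk) packets of `size` characters."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    """Sort packets by sequence number and rebuild the original message."""
    return "".join(chunk for _, chunk in sorted(packets))

packets = packetize("Packet switching shares network links between many users.")
packets.reverse()                 # simulate packets arriving out of order
print(reassemble(packets))        # the original message is restored
```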

Source: http://en.wikipedia.org/wiki/Asynchronous_Transfer_Mode

Asynchronous Transfer Mode (ATM) is a cell relay, packet switching network and protocol which encodes data into small fixed-sized cells.

ISDN is used to carry voice, data, video and images across a telephone network. ISDN stands for Integrated Services Digital Network. ISDN provides users with 128 Kbps of bandwidth, achieved by combining its two 64 Kbps channels. Frame relay complements ISDN and provides a service that sits between ISDN, which offers bandwidth at 128 Kbps, and Asynchronous Transfer Mode, which operates in a somewhat similar fashion to frame relay but at speeds of 155.520 Mbps or 622.080 Mbps. Frame relay is based on the older X.25 packet switching technology, which was designed to carry data over the analogue telephone network used for voice conversations.

PSDN stands for packet switched data network and is a type of data communication network. Packet switched networks do not establish a dedicated physical circuit in the way the public telephone network (a circuit switched network) does. Instead, data is sent as packets, each assigned a source and a destination address, and the packets rely on the routers along the way to read those addresses and route them through the network.

Mobile and Broadband Services

Digital Subscriber Line (DSL) is mainly used to bring high bandwidth connections to homes and small businesses over a copper telephone line. This can only be achieved if the subscriber stays within range of the telephone exchange. DSL offers download rates of up to 6 Mbps, allowing continuous transmission of video, audio and 3D effects. DSL is set to replace ISDN and to compete with the cable modem in providing multimedia to homes. DSL works by connecting your telephone line to the telephone office over copper wires that are twisted together.

Asymmetric Digital Subscriber Line (ADSL) is the variant most commonly used by home users. It provides a high download speed but a lower upload speed: using ADSL, up to 6.1 Mbps of data can be sent downstream and up to 640 Kbps upstream.

Source: http://en.wikipedia.org/wiki/Symmetric_Digital_Subscriber_Line

Symmetric Digital Subscriber Line (SDSL) is a digital subscriber line which runs over one pair of copper wires. The main difference between ADSL and SDSL lies in the upload and download speeds: SDSL provides the same upstream data rate as downstream, whereas ADSL's upstream rate can be much slower.

Source: http://searchnetworking.techtarget.com/sDefinition/0,,sid7_gci558545,00.html

HDSL (High bit-rate Digital Subscriber Line), one of the earliest forms of DSL, is used for wideband digital transmission within a corporate site and between the telephone company and a customer. The main characteristic of HDSL is that it provides equal bandwidth in both directions.

IDSL is a system in which data is transmitted at 128 Kbps on a regular copper telephone line from a user to a destination using digital transmission.

The local loop enables operators to connect directly to the consumer via copper wiring and then add their own equipment to offer broadband and other services. This process involves operators accessing local exchange buildings to connect to the network of copper lines which run to homes and businesses; BT is an example of an operator that runs local exchanges. The local loop connecting the telephone exchange to most subscribers is capable of carrying frequencies well beyond the 3.4 kHz upper limit of voice calls.

Benefits of using DSL

DSL can provide virtually instantaneous transmission of voice, data and video over ordinary copper phone lines. A DSL connection can eliminate delays while waiting to download information and graphics from the Internet, and it provides users with a cost effective high speed Internet connection. Another benefit is that a DSL connection is always on (like a LAN connection), with no waiting time for dialling or connecting.

There are now more than 10 million broadband connections in the UK. By December 2005 there were 9.792 million broadband connections in the UK and the average broadband take up rate during the three months to December was more than 70,000 per week.

How Car Mechanics Use Technology

Do you know why it’s important to find a good mechanic or a good repair shop to repair your car? The reason: it could save you hundreds of dollars in parts and labor! When we say a “good mechanic” or a “good repair shop”, we’re referring to mechanics who are equipped with the latest technology. You don’t want to entrust your car to a shop with outdated machinery and equipment because it will take longer to arrive at a diagnosis. And the longer a mechanic takes to make a diagnosis, the larger your bill for labor!

Remember that car mechanics charge by the hour.

Car repairs can cover any one or a combination of the following (note that this is not a complete repair list).

• Air conditioning checks

• Air bag checks

• Electrical wiring

• Cables

• Clutch service and repair

• Transmission repair

• Wheel Alignment

• Suspension

• Brakes

• Heating system

• Oil and lubrication

• Battery

• Power windows

Much of the troubleshooting that mechanics use to diagnose a problem is facilitated by car repair technology.

Ever noticed how some lights on your panel come on when something’s wrong? In most cases, an experienced mechanic will know immediately what the problem is, but there are instances where it will take sophisticated technology to lead to an accurate diagnosis.

Sophisticated technology comes in many forms and one popular one is troubleshooting software. Companies like Auto Tech have a software program that car owners can use to find out what is ailing their vehicle.

Most car owners who were cynical about software programs before should seriously consider purchasing a reliable car troubleshooting software program. For instance, some programs start out by asking you to input your car make and model number, the year of the car and what kinds of equipment it has. The software features a large database of information about all cars in the market and a car owner simply follows the steps when prompted. It employs what the industry calls a “tree diagnosis” where logical steps take you through the entire diagnosis process.

After you’ve keyed in your car’s profile, you use the drop down menu to choose the symptoms that your car is exhibiting. For instance, if you choose “squealing brakes”, the program mimics the squealing of brakes and if that’s the sound you hear, you confirm it and the program recommends a series of steps.

You may not really want to use a troubleshooting software, but imagine how much time and money you could save if you spoke to the mechanic intelligently, letting him know that you’re in the know.

Students who are studying towards certification buy certain tools of the trade. A couple of examples are Snap On and Mac Tools. These two are the most popular in the United States and Canada.

Mechanics now have a wide range of technology tools to help them understand car problems better: digital multimeters (electronic measuring), boroscopes (testing heat exchangers), fuel diagnostic testers (testing and balancing fuel injectors) and other such new technologies.

New car mechanic technologies help shorten the learning curve and speed up diagnosis so that the car owner isn’t saddled with too many labor hours.

Can You Upgrade a Maya Student Version to a Full Commercial Edition?

The Maya 3D software from Autodesk has long-been a staple tool for the production of the highest level of Visual Effects and animation for Oscar-winning blockbuster movies, TV shows and video games.

Students who are looking to secure themselves commercial work upon graduation these days typically need to show some familiarity with this software…ideally the more the better, since most companies these days use Maya as their main software package, and many have it deeply integrated into their own proprietary pipelines and tools – meaning that the situation is unlikely to change for the foreseeable future.

Students who take the opportunity currently afforded them of purchasing a copy of the Maya Student Edition do well since they are able to improve their skillset on the full Unlimited version of the software, and do so at a huge discount (the student edition typically retails for around $350).

However, a number of students wonder if they also qualify for a discount later on if they wish to upgrade to a full commercial edition of the Maya software. Jumping from graduation to commercial work can be a challenge, so any way to alleviate the expense of setting yourself up as a freelancer is more than welcome…but can it actually be done?

Upgrade Maya Student Edition To Commercial License?

Yes, you’ll be pleased to know that there is a program available where current owners of a Maya student license are able to upgrade to the full commercial license at a massive discount of roughly two-thirds…in other words, a full commercial license of Maya would typically cost around $3,900, but student license holders are able to upgrade their current license to a full commercial one for just $1,300.

This is huge, and can be a real lifesaver for folks who want to go it alone and start their own freelance business but are wary of the huge costs involved.

In fact, if you’re smart about it and don’t mind a little negotiation, the license could even be purchased with a down-payment from your first client before you even begin working on their project.

Such a huge discount on the student to commercial upgrade is a great incentive for students, especially at this economic time, and it also makes purchasing the Maya student edition in the first place a sensible option, since you get access to these kinds of discounted upgrades.

3 Main Causes of Kernel Errors

A kernel error is a failure in some code critical to Windows. If you have ever encountered a Blue Screen of Death (BSoD), then you have seen a kernel error. Windows is actually several layers of programs made to work together. You can think of Windows as if it were your body, with many pieces working together to make a whole, and, like your body, some parts of Windows are more important than others.

The kernel is the most important part of Windows. It includes critical programs to handle things like memory management and device drivers for the graphics card. These programs are like a body's heart and brain. If something in the kernel crashes, it will often cause all of Windows to crash.

Software Failures

Because there are a lot of programs in the kernel, there are many opportunities for bugs to appear. Although Microsoft does extensive testing to get rid of bugs, their testing facilities cannot run through all the combinations that billions of computers use with Windows, so some bugs get through.

However, many of the kernel failures are in device drivers written by companies that make hardware, not by Microsoft. Your graphics card, for example, probably uses a driver created by the video company. These companies often work with Microsoft to test their drivers, but having companies working together adds an additional layer of complexity.

Hardware Failures

A hardware failure can cause a kernel error. If your graphics card fails, it can send bad data to the graphics device driver, which then crashes, creating a kernel error. If your hard disk fails, it can corrupt files used by Windows and cause the programs that use those files to crash.

Registry Failures

Registry failures can cause kernel errors. The registry is a database of information that Windows uses to store information about programs. If the registry gets corrupt, the programs that use it can cause kernel errors.

Registry corruption can come from either software or hardware failures. Software corruption can come from a bug in one of the programs that writes information out to the registry. Or if you turn off your computer without doing a complete shutdown, the registry files may not get completely written to the disk. Hardware corruption can happen when the hard disk fails causing parts of the registry files to be lost. It's a good idea to do some research on kernel errors and other registry issues.

The Role of Technology in Education

In the current age we live in, technology has become an important component. Every day there is some new gadget or software that makes lives easier and improves on the technology and software that already exists. Making lives easier is not, however, the only role technology plays in our lives.

Technology is playing an increasing role in education. As technology advances, it is used to benefit students of all ages in the learning process.

Technology used in the classroom helps students absorb the material. For example, since some people are visual learners, projection screens linked to computers can allow students to see their notes instead of simply listening to a teacher deliver a lecture.

Software can be used to supplement class curriculum. The programs provide study questions, activities, and even tests and quizzes for a class that can help students continue learning outside the classroom.

Technology has also become part of many curriculums, even outside of computer and technology classes. Students use computers to create presentations and use the Internet to research topics for papers and essays.

Students also learn to use the technology available to them in computer and tech classes. This ensures that after graduation they will be able to use the technology in a work setting, which may put them ahead of someone who did not have access to a particular technology or software in their own school setting.

As technology advances, students have better access to educational opportunities like these. When something new and "better" is released, the "older" technology becomes more affordable, allowing it to be used in educational settings, even when schools are on a tight budget.

Technology has also advanced to help children even before they've started school. Educational video games and systems for young children help them prepare for school and in some cases get a head start on their education.

There are people who may say children are "spoiled" by technology. Instead of being able to add a long column of numbers in their heads, for example, they turn to a calculator. Regardless of these arguments, technology is an important part of today's society. By incorporating it into the classroom, students will be better equipped to transition from the classroom to the work place.

How Useful is CAD Software to Engineers and Architects?

The emergence of advanced technology has made people today dependent on machines. Using computers and software, for example, is a very common illustration of this. Computer experts are coming up with more and more software to make more and more jobs easier.

A more specific illustration of this can be found in the modern approach towards engineering and architecture. These days, professionals in these fields use CAD computer software which is a program that allows them to create designs faster, easier and with more accurate measurements. Aside from the convenience that CAD software brings, it also helps put architects and engineers ahead of their competitors. CAD, which can render designs that are two-dimensional or three dimensional, stands for Computer Aided Design and has been in use since 1982.

So how does CAD computer software work? And what does it do exactly to help engineers and architects? The program is actually multifaceted in the sense that there are many ways it can help. To make CAD work will require, however, a careful study of its features and the many ways it can be used. It is rather a complex yet flexible and highly functional program.

This article will not be enough to discuss the various ways that CAD works but pinpointing its advantages could give some very good ideas. One great advantage of CAD computer software is its easy-to-use tools in the creation and alteration of designs. Obviously, this is so much better than the old fashioned way of using a pencil and eraser directly on paper. This method of designing is obviously so much easier and engineers and architects simply have more time to finish other tasks. In other words, high productivity is going to be the main end result of using CAD.

Before the design is actually printed on paper, CAD also allows both the design professional and the client to preview what has been finished so far. Any alterations can be made simply by manipulating the drawing through the use of the software. With CAD, it is so much easier to spot errors because the designs can be rendered exactly as they would be in reality. Hence, modifications can be done even before printing, thus, allowing one to save.

With the tough competition that everyone has to face these days, it is wise to take advantage of new technologies that can help put them ahead in the race. While traditional methods hold a significant part in the history of design, advanced tools such as CAD software should only be welcomed as man’s way of furthering development in a field of expertise that he himself has created long ago.

The Importance Of Excel In The Workplace

Excel is perhaps the most important computer software program used in the workplace today. That’s why so many workers and prospective employees are required to learn Excel in order to enter or remain in the workplace.

From the viewpoint of the employer, particularly those in the field of information systems, the use of Excel as an end-user computing tool is essential. Not only are many business professionals using Excel to perform everyday functional tasks in the workplace, an increasing number of employers rely on Excel for decision support.

In general, Excel dominates the spreadsheet product industry with a market share estimated at 90 percent. Excel 2007 has the capacity for spreadsheets of up to a million rows by 16,000 columns, enabling the user to import and work with massive amounts of data and achieve faster calculation performance than ever before.

Outside the workplace, Excel is in broad use for everyday problem solving.

Let’s say you have a home office. You can use Excel to calculate sales tax on a purchase, calculate the cost of a trip by car, create a temperature converter, calculate the price of pizza per square inch and do analysis of inputted data. You can track your debt, income and assets, determine your debt to income ratio, calculate your net worth, and use this information to prepare for the process of applying for a mortgage on a new house. The personal uses for Excel are almost as endless as the business uses for this software – and an Excel tutorial delves into the practical uses of the program for personal and business use.

The use of spreadsheets on computers is not new. Spreadsheets, in electronic form, have been in existence since before the introduction of the personal computer. Forerunners to Excel and Lotus 1-2-3 were packages such as VisiCalc, developed and modeled on the accountant’s financial ledger. Since 1987, spreadsheet programs have been impacting the business world. Along the way, computerized spreadsheets have become a pervasive and increasingly effective tool for comparative data analysis throughout the world.

Today, end users employ Excel to create and modify spreadsheets as well as to author web pages with links and complex formatting specifications. They create macros and scripts. While some of these programs are small, one-shot calculations, many are much more critical and affect significant financial decisions and business transactions.

Widely used by businesses, service agencies, volunteer groups, private sector organizations, scientists, students, educators, trainers, researchers, journalists, accountants and others, Microsoft Excel has become a staple of end users and business professionals.

The beauty of Excel is that it can be used as a receiver of workplace or business data, or as a calculator, a decision support tool, a data converter or even a display spreadsheet for information interpretation. Excel can create a chart or graph, operate in conjunction with Mail Merge functions, import data from the Internet, create a concept map and sequentially rank information by importance.

Excel offers new data analysis and visualization tools that assist in analyzing information, spotting trends and accessing information more easily than in the past. Using conditional formatting with rich data display schemes, you can evaluate and illustrate important trends and highlight exceptions with colored gradients, data bars and icons.

Indeed, Excel can be customized to perform such a wide variety of functions that many businesses can’t operate without it. Excel training has become mandatory in many workplaces; in fact, computer software training is a must for any workplace trying to keep up with the times.

Let’s say you’re an employer with 97 workers, 17 of whom called in sick today, and you want to know the percentage represented by absentees. Excel can do that. You can learn Excel and use it to determine the ratio of male to female employees, the percentage of minorities on the payroll, and the ranking of each worker by compensation package amount, including the percentages of that package according to pay and benefits. You can use Excel to keep track of production by department, information that may assist you in future development plans. You can create additional spreadsheets to track data on vendors and customers while maintaining an ongoing inventory of product stock.
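
As a quick worked version of the absentee figure above: 17 absent workers out of 97 is 17 ÷ 97 ≈ 0.175, or roughly 17.5 percent. In a worksheet this could be as simple as a formula like =17/97, or =B2/B1 if the head count and absentee count sit in cells B1 and B2 (a hypothetical layout), with the result cell formatted as a percentage.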

Let’s say you want to know your business production versus cost. You don’t have to be a math wiz – you just have to learn Excel. Excel allows you to input all of the data, analyze it, sort it according to your customized format, and display the results with color, shading, backgrounds, icons and other gimmicks that offer time-saving assistance in later locating precisely the information desired. If this spreadsheet is for presentation purposes, Excel helps you put it together in such a visually appealing way that the data may seem to pop and sparkle.

The single most important thing an employer may do is learn Excel – it is one of the most essential tools of the workplace.

Excel and Microsoft are trademarks of Microsoft Corporation, registered in the U.S. and other countries. Lotus is a registered trademark of International Business Machines Corporation in the U.S. and/or other countries.

What Are the Advantages of Using a Web Based Project Management System

Every project needs a large number of people for the completion of its tasks. Projects come in several kinds, and their size and capacity also vary. To help managers control all of these tasks, management systems were introduced; a project management system is meant to make the manager's job easier. The first type of project management system was manual.

However, nowadays a great deal of web-based project management software is available on the market, so there is no need to process data manually. Everything is computerized; managers just have to enter the basic information into the system. As a result, many project managers are opting for web-based project management systems, which give them access to web-based project management software and tools.

Using a web-based management system has several benefits. Project managers can reach the system from any computer, anywhere; they can also contact their team members and check the progress of work. Discussing any problems that arise with the team is a lot easier, and team members can also interact with each other via e-mail.

Project scheduling is the basic function of a web-based task management plan in a large organisation. In many ways a web-based activity management plan can help your managers achieve optimum results: whether a project is based on finance, marketing, construction, or information technology (IT), a web-based project management plan can help.

A web-based activity management plan helps managers to make a proper schedule for the project. Web-based software has many tools which help in managing time and activities; the software includes spreadsheets, network diagrams, or Gantt charts to control the task management scheme.

In a web-based project-management scheme, the software is typically written in languages such as HTML, ASP, or PHP, and the team accesses it through a web browser. The main software is installed on a server so that it can serve multiple clients.

A project management scheme helps managers to supervise the whole team easily. If the manager finds a team member running late in the completion of a task, he/she can track the problem and reassign the work, thus avoiding any delays. A web-based project-management system enables managers to distribute the workload according to the capability of the human resources (HR) available. In addition, he/she can monitor the performance of each person involved in the completion of a job. A web-based project management system also enables the manager to measure the achievement and performance of the team against the strategy chalked out for completion, or achievement, of the target.

A web-based project-management system keeps the human resources satisfied, because whatever performance they deliver is registered and not overlooked. If a company is using a custom-made programme which is flexible enough for different projects, it remains cost effective and is not a burden on the bottom line. For different projects with different dynamics, companies may need a tailor-made programme; this can be a little costly, but it ensures better management, proper monitoring, and timely completion of tasks, ultimately ensuring good performance. Nowadays, web-based project software is a very important tool for the management of any project. Furthermore, using the correct project management scheme and software can help managers to manage their projects smoothly and effectively.

Five Reasons to Study Forensic Accounting

Forensic accounting is the practice of utilizing accounting, auditing and investigative skills to assist in legal matters, obtain accurate results and establish accountability in administrative proceedings. You may be wondering, why study forensic accounting?

Well, here are the five reasons:

  • Our current economic crisis has left many companies facing serious financial issues that may lead to bankruptcy. Some of these companies have been forced to stoop to the lowest level to save themselves by committing frauds and swindles. This makes forensic accounting an important job whose demand increases each year.
  • Internal audit within a company often cannot throw light on the different facts and other hidden aspects of corporate fraud. Internal auditors are hardly in a position to initiate proper action at the proper time due to their lack of forensic accounting skills.
  • Forensic accounting is a new and very exciting field of study. It changes the way the world views accounting, which has been a theoretically dull field in itself.
  • If you are ambitious, fast, observant, creative and diligent, forensic accounting is definitely a dream job and a great investment. Using computer technology, creative thinking, and careful inspection of financial records, the hidden proof of crimes can be discovered.
  • You will always be equipped with the latest computer software and gadgets. Forensic accounting relies heavily on computer software and generalized audit software to aid in the detection and investigation of fraud and white-collar crime. Investigative tools such as data mining, link analysis software, case management software and the use of the Internet are essential skills as well.

In conclusion, forensic accounting has been stereotyped as a boring and uninteresting job, a perception that has been proved wrong. There are many benefits to studying forensic accounting: not only will you be rewarded with a stable job, you will also look forward to going to work every day.

Why Do We Need Software Engineering?

To understand the necessity for software engineering, we must pause briefly to look back at the recent history of computing. This history will help us to understand the problems that started to become obvious in the late sixties and early seventies, and the solutions that have led to the creation of the field of software engineering. These problems were referred to by some as “The Software Crisis,” so named for the symptoms of the problem. The situation might also have been called “The Complexity Barrier,” so named for the primary cause of the problems. Some refer to the software crisis in the past tense. The crisis is far from over, but thanks to the development of many new techniques that are now included under the title of software engineering, we have made and are continuing to make progress.

In the early days of computing the primary concern was with building or acquiring the hardware. Software was almost expected to take care of itself. The consensus held that “hardware” is “hard” to change, while “software” is “soft,” or easy to change. Accordingly, most people in the industry carefully planned hardware development but gave considerably less forethought to the software. If the software didn’t work, they believed, it would be easy enough to change it until it did work. In that case, why make the effort to plan?

The cost of software amounted to such a small fraction of the cost of the hardware that no one considered it very important to manage its development. Everyone, however, saw the importance of producing programs that were efficient and ran fast, because this saved time on the expensive hardware. People’s time was assumed to be worth spending in order to save machine time; making the people process efficient received little priority.

This approach proved satisfactory in the early days of computing, when the software was simple. However, as computing matured, programs became more complex and projects grew larger. Whereas programs had previously been routinely specified, written, operated, and maintained all by the same person, programs began to be developed by teams of programmers to meet someone else’s expectations.

Individual effort gave way to team effort. Communication and coordination which once went on within the head of one person had to occur between the heads of many persons, making the whole process very much more complicated. As a result, communication, management, planning and documentation became critical.

Consider this analogy: a carpenter might work alone to build a simple house for himself or herself without more than a general concept of a plan. He or she could work things out or make adjustments as the work progressed. That’s how early programs were written. But if the home is more elaborate, or if it is built for someone else, the carpenter has to plan more carefully how the house is to be built. Plans need to be reviewed with the future owner before construction starts. And if the house is to be built by many carpenters, the whole project certainly has to be planned before work starts so that as one carpenter builds one part of the house, another is not building the other side of a different house. Scheduling becomes a key element so that cement contractors pour the basement walls before the carpenters start the framing. As the house becomes more complex and more people’s work has to be coordinated, blueprints and management plans are required.

As programs became more complex, the early methods used to make blueprints (flowcharts) were no longer satisfactory to represent this greater complexity. And thus it became difficult for one person who needed a program written to convey to another person, the programmer, just what was wanted, or for programmers to convey to each other what they were doing. In fact, without better methods of representation it became difficult for even one programmer to keep track of what he or she is doing.

The times required to write programs and their costs began to exceed all estimates. It was not unusual for systems to cost more than twice what had been estimated and to take weeks, months or years longer than expected to complete. The systems turned over to the client frequently did not work correctly because the money or time had run out before the programs could be made to work as originally intended. Or the program was so complex that every attempt to fix a problem produced more problems than it fixed. As clients finally saw what they were getting, they often changed their minds about what they wanted. At least one very large military software systems project costing several hundred million dollars was abandoned because it could never be made to work properly.

The quality of programs also became a big concern. As computers and their programs were used for more vital tasks, like monitoring life support equipment, program quality took on new meaning. Since we had increased our dependency on computers and in many cases could no longer get along without them, we discovered how important it is that they work correctly.

Making a change within a complex program turned out to be very expensive. Often even to get the program to do something slightly different was so hard that it was easier to throw out the old program and start over. This, of course, was costly. Part of the evolution in the software engineering approach was learning to develop systems that are built well enough the first time so that simple changes can be made easily.

At the same time, hardware was growing ever less expensive. Tubes were replaced by transistors and transistors were replaced by integrated circuits, until microcomputers costing less than three thousand dollars offered the power of machines that had once cost several million dollars. As an indication of how fast change was occurring, the cost of a given amount of computing decreased by one half every two years. Given this realignment, the times and costs to develop the software were no longer so small, compared to the hardware, that they could be ignored.

As the cost of hardware plummeted, software continued to be written by humans, whose wages were rising. The savings from productivity improvements in software development from the use of assemblers, compilers, and data base management systems did not proceed as rapidly as the savings in hardware costs. Indeed, today software costs not only can no longer be ignored, they have become larger than the hardware costs. Some current developments, such as nonprocedural (fourth generation) languages and the use of artificial intelligence (fifth generation), show promise of increasing software development productivity, but we are only beginning to see their potential.

Another problem was that in the past programs were often written before it was fully understood what the program needed to do. Once the program had been written, the client began to express dissatisfaction. And if the client was dissatisfied, ultimately the producer, too, was unhappy. As time went by software developers learned to lay out with paper and pencil exactly what they intended to do before starting. Then they could review the plans with the client to see if they met the client’s expectations. It is simpler and less expensive to make changes to this paper-and-pencil version than to make them after the system has been built. Using good planning makes it less likely that changes will have to be made once the program is finished.

Unfortunately, until several years ago no good method of representation existed to describe satisfactorily systems as complex as those that are being developed today. The only good representation of what the product would look like was the finished product itself. Developers could not show clients what they were planning. And clients could not see whether the software was what they wanted until it was finally built. Then it was too expensive to change.

Again, consider the analogy of building construction. An architect can draw a floor plan. The client can usually gain some understanding of what the architect has planned and give feed back as to whether it is appropriate. Floor plans are reasonably easy for the layperson to understand because most people are familiar with the drawings representing geometrical objects. The architect and the client share common concepts about space and geometry. But the software engineer must represent for the client a system involving logic and information processing. Since they do not already have a language of common concepts, the software engineer must teach a new language to the client before they can communicate.

Moreover, it is important that this language be simple so it can be learned quickly.
