A collection of texts in Kazakh, Russian and English for developing skills in the types of speech activity of secondary-level students




Design
Today, creating an academic website goes hand-in-hand with creating your CV and presenting who you are to your academic and professional peers. Creating and maintaining a website is essential for disseminating your research and publications. Use your academic personal website to highlight your personality, profile, research findings, publications, achievements, affiliations and more. In addition, by using some of the many social media tools available, you can further amplify the information contained in your website.
An academic personal website takes you a step further in terms of increasing your visibility because it is an ideal place to showcase your complete research profile. You will attract attention to your publications, your name recognition will increase and you will get cited more. Moreover, a website is also useful for networking and collaborating with others, as well as for job searches and applications.
Data storage
Online storage is an emerging method of data storage and back-up. A remote server with a network connection and special software backs up files, folders, or the entire contents of a hard drive. There are many companies that provide web-based backup.
One offsite technology in this area is cloud computing. This allows colleagues in an organization to share resources, software and information over the Internet.
Continuous backup and storage on a remote hard drive eliminates the risk of data loss as a result of fire, flood or theft. Remote data storage and back-up providers encrypt the data and set up password protection to ensure maximum security.
Small businesses and individuals choose to save data in a more traditional way. External drives, disks and magnetic tapes are very popular data storage solutions. USB flash drives are very practical for storing and backing up small volumes of data. However, they are not very reliable and do not protect the user in case of a disaster.
Types of network
Dear Agatha
Following our meeting last week, please find my recommendations for your business. I think you should set up a LAN, or Local Area Network, and a WAN, or Wide Area Network, for your needs. A LAN connects devices over a small area, for example your apartment and the shop. In addition, you should connect office equipment, such as the printer, scanner and fax machine, to your LAN because you can then share these devices between users. I'd recommend that we connect the LAN to a WAN so you can link to the Internet and sell your products. In addition, I'd recommend we set up a Virtual Private Network so that you have remote access to your company's LAN when you travel.

A VPN is a private network that uses a public network, usually the Internet, to connect remote sites or users together.


Let's meet on Friday to discuss these recommendations.

Best regards

Katharina
The Digital Divide
A recent survey has shown that the number of people in the United Kingdom who do not intend to get internet access has risen. These people, who are known as 'net refuseniks', make up 44% of UK households, or 11.2 million people in total.
The research also showed that more than 70 percent of these people said that they were not interested in getting connected to the internet. This number has risen from just over 50% in 2005, with most giving lack of computer skills as a reason for not getting internet access, though some also said it was because of the cost.
More and more people are getting broadband and high speed net is available almost everywhere in the UK, but there are still a significant number of people who refuse to take the first step.
The cost of getting online is going down and internet speeds are increasing, so many see the main challenge to be explaining the relevance of the internet to this group. This would encourage them to get connected before they are left too far behind. The gap between those who have access to and use the internet and those who do not is known as the digital divide, and if the gap continues to widen, those without access will get left behind and miss out on many opportunities, especially in their careers.
The First Computer Programmer
Ada Lovelace was the daughter of the poet Lord Byron. She was taught by Mary Somerville, a well-known researcher and scientific author, who introduced her to Charles Babbage in June 1833. Babbage was an English mathematician, who first had the idea for a programmable computer.
In 1842 and 1843, Ada translated the work of an Italian mathematician, Luigi Menabrea, on Babbage's Analytical Engine. Though mechanical, this machine was an important step in the history of computers; it was the design of a mechanical general-purpose computer. Babbage worked on it for many years until his death in 1871. However, because of financial, political, and legal issues, the engine was never built. The design of the machine was very modern; it anticipated the first completed general-purpose computers by about 100 years.
When Ada translated the article, she added a set of notes which specified in complete detail a method for calculating certain numbers with the Analytical Engine, which have since been recognized by historians as the world's first computer program. She also saw possibilities in it that Babbage hadn't: she realised that the machine could compose pieces of music. The computer programming language 'Ada', used in some aviation and military programs, is named after her.
Atom-sized transistor created by scientists
By David Derbyshire, Science Correspondent
Scientists have shrunk a transistor to the size of a single atom, bringing closer the day of microscopic electronic devices that will revolutionise computing, engineering and medicine.
Researchers at Cornell University, New York, and Harvard University, Boston, fashioned the two "nano-transistors" from purpose-made molecules. When voltage was applied, electrons flowed through a single atom in each molecule.
The ability to use individual atoms as components of electronic circuits marks a key breakthrough in nano-technology, the creation of machines at the smallest possible size.
Prof Paul McEuen, a physicist at Cornell, who reports the breakthrough in today's issue of Nature, said the single-atom transistor did not have all the functions of a conventional transistor such as the ability to amplify.
But it had potential use as a chemical sensor, responding to any change in its environment.
Basic principles of information security
Key concepts. For over twenty years, information security has held confidentiality, integrity and availability (known as the CIA triad) to be the core principles of information security. There is continuous debate about extending this classic trio. Other principles such as Accountability have sometimes been proposed for addition. It has been pointed out that issues such as Non-Repudiation do not fit well within the three core concepts, and as regulation of computer systems has increased (particularly amongst the Western nations), Legality is becoming a key consideration for practical security installations. In 1992, and revised in 2002, the OECD's Guidelines for the Security of Information Systems and Networks proposed the nine generally accepted principles: Awareness, Responsibility, Response, Ethics, Democracy, Risk Assessment, Security Design and Implementation, Security Management, and Reassessment. Based upon those, in 2004 the NIST's Engineering Principles for Information Technology Security proposed 33 principles, from each of which derived guidelines and practices. In 2002, Donn Parker proposed an alternative model for the classic CIA triad that he called the six atomic elements of information. The elements are confidentiality, possession, integrity, authenticity, availability, and utility.
Confidentiality. Confidentiality is the term used to prevent the disclosure of information to unauthorized individuals or systems. For example, a credit card transaction on the Internet requires the credit card number to be transmitted from the buyer to the merchant and from the merchant to a transaction processing network. The system attempts to enforce confidentiality by encrypting the card number during transmission, by limiting the places where it might appear (in databases, log files, backups, printed receipts, and so on), and by restricting access to the places where it is stored. If an unauthorized party obtains the card number in any way, a breach of confidentiality has occurred. Breaches of confidentiality take many forms. Permitting someone to look over your shoulder at your computer screen while you have confidential data displayed on it could be a breach of confidentiality. If a laptop computer containing sensitive information about a company's employees is stolen or sold, it could result in a breach of confidentiality. Giving out confidential information over the telephone is a breach of confidentiality if the caller is not authorized to have the information. Confidentiality is necessary (but not sufficient) for maintaining the privacy of the people whose personal information a system holds.
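To make the card-number example concrete, here is a minimal sketch, assuming Python and the third-party cryptography package (neither is mentioned in the text): the number is encrypted before transmission, so an interceptor sees only an unreadable token.

```python
# A minimal confidentiality sketch: the card number is encrypted before
# transmission, so an eavesdropper sees only an unreadable token.
# Requires the third-party "cryptography" package; all values are invented.
from cryptography.fernet import Fernet

key = Fernet.generate_key()    # secret shared by sender and receiver
cipher = Fernet(key)

token = cipher.encrypt(b"4111 1111 1111 1111")  # what actually travels
print(cipher.decrypt(token).decode())           # only a key holder recovers it
```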
Integrity. In information security, integrity means that data cannot be modified undetectably. This is not the same thing as referential integrity in databases, although it can be viewed as a special case of consistency as understood in the classic ACID model of transaction processing. Integrity is violated when a message is actively modified in transit. Information security systems typically provide message integrity in addition to data confidentiality.

Availability. For any information system to serve its purpose, the information must be available when it is needed. This means that the computing systems used to store and process the information, the security controls used to protect it, and the communication channels used to access it must be functioning correctly. High availability systems aim to remain available at all times, preventing service disruptions due to power outages, hardware failures, and system upgrades. Ensuring availability also involves preventing denial-of-service attacks.

Authenticity. In computing, e-business and information security it is necessary to ensure that the data, transactions, communications or documents (electronic or physical) are genuine. It is also important for authenticity to validate that both parties involved are who they claim to be.

Non-repudiation. In law, non-repudiation implies one's intention to fulfill one's obligations under a contract. It also implies that one party of a transaction cannot deny having received a transaction, nor can the other party deny having sent a transaction. Electronic commerce uses technology such as digital signatures and public key encryption to establish authenticity and non-repudiation.
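As a rough illustration of message integrity (one possible mechanism, not the one the text prescribes), a keyed hash can travel with the message; any modification in transit makes verification fail:

```python
# Integrity sketch using Python's standard hmac module; key and message invented.
import hashlib
import hmac

key = b"shared-secret-key"
message = b"transfer $100 to account 42"

tag = hmac.new(key, message, hashlib.sha256).hexdigest()  # sent with the message

def verify(received_msg: bytes, received_tag: str) -> bool:
    expected = hmac.new(key, received_msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received_tag)  # constant-time comparison

print(verify(message, tag))                          # True: message unmodified
print(verify(b"transfer $999 to account 13", tag))   # False: tampering detected
```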
Risk management
Risk management is the process of identifying vulnerabilities and threats to the information resources used by an organization in achieving business objectives, and deciding what countermeasures, if any, to take in reducing risk to an acceptable level, based on the value of the information resource to the organization.
There are two things in this definition that may need some clarification. First, the process of risk management is an ongoing iterative process. It must be repeated indefinitely. The business environment is constantly changing and new threats and vulnerabilities emerge every day. Second, the choice of countermeasures (controls) used to manage risks must strike a balance between productivity, cost, effectiveness of the countermeasure, and the value of the informational asset being protected. Risk is the likelihood that something bad will happen that causes harm to an informational asset (or the loss of the asset). A vulnerability is a weakness that could be used to endanger or cause harm to an informational asset. A threat is anything (man-made or act of nature) that has the potential to cause harm.
The likelihood that a threat will use a vulnerability to cause harm creates a risk. When a threat does use a vulnerability to inflict harm, it has an impact. In the context of information security, the impact is a loss of availability, integrity, and confidentiality, and possibly other losses (lost income, loss of life, loss of real property). It should be pointed out that it is not possible to identify all risks, nor is it possible to eliminate all risk. The remaining risk is called residual risk.
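A minimal sketch of how risk is often scored in practice (the 1-5 scales and thresholds below are invented conventions, not taken from the text):

```python
# Risk scored as likelihood x impact on invented 1-5 scales.
def risk_rating(likelihood: int, impact: int) -> str:
    """Return a coarse rating from 1-5 likelihood and impact scores."""
    score = likelihood * impact
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

# A likely threat (4) against a moderately valuable asset (3):
print(risk_rating(4, 3))  # medium
```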
A risk assessment is carried out by a team of people who have knowledge of specific areas of the business. Membership of the team may vary over time as different parts of the business are assessed. The assessment may use a subjective qualitative analysis based on informed opinion, or, where reliable dollar figures and historical information are available, the analysis may use quantitative analysis.

The research has shown that the most vulnerable point in most information systems is the human user, operator, or designer. The practice of information security management recommends the following to be examined during a risk assessment:


security policy;
organization of information security;
asset management;
human resources security;
physical and environmental security;
communications and operations management;
access control;
information systems acquisition, development and maintenance;
information security incident management;
business continuity management;
regulatory compliance.

In broad terms, the risk management process consists of:

  1. Identification of assets and estimating their value. Include: people, buildings, hardware, software, data (electronic, print, other), supplies.
  2. Conduct a threat assessment. Include: acts of nature, acts of war, accidents, malicious acts originating from inside or outside the organization.
  3. Conduct a vulnerability assessment, and for each vulnerability, calculate the probability that it will be exploited. Evaluate policies, procedures, standards, training, physical security, quality control, technical security.
  4. Calculate the impact that each threat would have on each asset. Use qualitative analysis or quantitative analysis.
  5. Identify, select and implement appropriate controls. Provide a proportional response. Consider productivity, cost effectiveness, and value of the asset.
  6. Evaluate the effectiveness of the control measures. Ensure the controls provide the required cost-effective protection without discernible loss of productivity.
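As a rough quantitative illustration of steps 1-4 (all asset values, rates and exposure factors below are invented; annualized loss expectancy is one common measure, though the text does not prescribe it):

```python
# Hypothetical sketch pairing each asset with each threat (steps 1-4 above).
assets = {"customer database": 500_000, "web server": 50_000}  # value in dollars
threats = {"fire": 0.01, "malware": 0.30}        # estimated incidents per year
exposure = {"fire": 0.8, "malware": 0.5}         # fraction of value lost per incident

for asset, value in assets.items():
    for threat, rate in threats.items():
        ale = value * exposure[threat] * rate    # single-loss expectancy x frequency
        print(f"{asset} / {threat}: expected annual loss = ${ale:,.0f}")
```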

For any given risk, Executive Management can choose to accept the risk based upon the relatively low value of the asset, the relatively low frequency of occurrence, and the relatively low impact on the business. Or, leadership may choose to mitigate the risk by selecting and implementing appropriate control measures to reduce the risk. In some cases, the risk can be transferred to another business by buying insurance or out-sourcing to another business. The reality of some risks may be disputed. In such cases leadership may choose to deny the risk. This is itself a potential risk.

When Management chooses to mitigate a risk, they will do so by implementing one or more of three different types of controls.
Administrative. Administrative controls (also called procedural controls) consist of approved written policies, procedures, standards and guidelines. Administrative controls form the framework for running the business and managing people. They inform people on how the business is to be run and how day-to-day operations are to be conducted. Laws and regulations created by government bodies are also a type of administrative control because they inform the business. Some industry sectors have policies, procedures, standards and guidelines that must be followed – the Payment Card Industry (PCI) Data Security Standard required by Visa and MasterCard is such an example. Other examples of administrative controls include the corporate security policy, password policy, hiring policies, and disciplinary policies. Administrative controls form the basis for the selection and implementation of logical and physical controls. Logical and physical controls are manifestations of administrative controls. Administrative controls are of paramount importance.
Logical. Logical controls (also called technical controls) use software and data to monitor and control access to information and computing systems. For example: passwords, network and host-based firewalls, network intrusion detection systems, access control lists, and data encryption are logical controls. An important logical control that is frequently overlooked is the principle of least privilege. The principle of least privilege requires that an individual, program or system process is not granted any more access privileges than are necessary to perform the task. A blatant example of the failure to adhere to the principle of least privilege is logging into Windows as user Administrator to read e-mail and surf the Web. Violations of this principle can also occur when an individual collects additional access privileges over time. This happens when employees' job duties change, or they are promoted to a new position, or they transfer to another department. The access privileges required by their new duties are frequently added onto their already existing access privileges, which may no longer be necessary or appropriate.
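A minimal sketch of the least-privilege check described above (the roles and permissions are invented for illustration):

```python
# Least privilege: a role is granted only the permissions its task requires.
ROLE_PERMISSIONS = {
    "mail_user":     {"read_mail", "send_mail"},
    "administrator": {"read_mail", "send_mail", "install_software", "manage_users"},
}

def authorize(role: str, action: str) -> bool:
    """Grant an action only if the role explicitly includes it."""
    return action in ROLE_PERMISSIONS.get(role, set())

# Reading e-mail needs only the mail_user role, not Administrator rights.
print(authorize("mail_user", "read_mail"))         # True
print(authorize("mail_user", "install_software"))  # False
```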
Physical. Physical controls monitor and control the environment of the work place and computing facilities. They also monitor and control access to and from such facilities. For example: doors, locks, heating and air conditioning, smoke and fire alarms, fire suppression systems, cameras, barricades, fencing, security guards, cable locks, etc. Separating the network and work place into functional areas is also a physical control.
An important physical control that is frequently overlooked is the separation of duties. Separation of duties ensures that an individual cannot complete a critical task by himself. For example: an employee who submits a request for reimbursement should not also be able to authorize payment or print the check. An applications programmer should not also be the server administrator or the database administrator – these roles and responsibilities must be separated from one another.
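The reimbursement example can be stated in a few lines of code (a sketch with invented names, not a real workflow system):

```python
# Separation of duties: the person who submits a request may never approve it.
def can_approve(submitter: str, approver: str) -> bool:
    return approver != submitter

print(can_approve("alice", "bob"))    # True: two different people
print(can_approve("alice", "alice"))  # False: self-approval blocked
```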


Defense in-depth
Information security must protect information throughout its life span, from the initial creation of the information on through to its final disposal. The information must be protected while in motion and while at rest. During its lifetime, information may pass through many different information processing systems and through many different parts of information processing systems. There are many different ways the information and information systems can be threatened. To fully protect the information during its lifetime, each component of the information processing system must have its own protection mechanisms. The building up, layering on and overlapping of security measures is called defense in depth. The strength of any system is no greater than its weakest link. Using a defense-in-depth strategy, should one defensive measure fail, there are other defensive measures in place that continue to provide protection.
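A toy sketch of the layering idea (the three checks below are invented stand-ins for real network, host and application controls):

```python
# Each layer independently inspects a request; every layer must pass,
# so bypassing a single control is not enough to get through.
def network_firewall(req): return req.get("port") in {80, 443}
def host_firewall(req):    return req.get("source_ip") != "10.0.0.66"
def app_auth(req):         return req.get("token") == "valid-token"

def admit(req) -> bool:
    return all(layer(req) for layer in (network_firewall, host_firewall, app_auth))

print(admit({"port": 443, "source_ip": "10.0.0.5", "token": "valid-token"}))  # True
print(admit({"port": 443, "source_ip": "10.0.0.5", "token": "stolen"}))       # False
```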
The three types of controls mentioned above (administrative, logical, and physical) can be used to form the basis upon which to build a defense-in-depth strategy. With this approach, defense-in-depth can be conceptualized as three distinct layers or planes laid one on top of the other. Additional insight into defense-in-depth can be gained by thinking of it as forming the layers of an onion, with data at the core of the onion, people the next outer layer, and network security, host-based security and application security forming the outermost layers. Both perspectives are equally valid, and each provides valuable insight into the implementation of a good defense-in-depth strategy.

Security classification for information. An important aspect of information security and risk management is recognizing the value of information and defining appropriate procedures and protection requirements for the information. Not all information is equal, and so not all information requires the same degree of protection. This requires information to be assigned a security classification.
The first step in information classification is to identify a member of senior management as the owner of the particular information to be classified. Next, develop a classification policy. The policy should describe the different classification labels, define the criteria for information to be assigned a particular label, and list the required security controls for each classification.

Some factors that influence which classification should be assigned to information include how much value the information has to the organization, how old the information is and whether or not the information has become obsolete. Laws and other regulatory requirements are also important considerations when classifying information.

The type of information security classification labels selected and used will depend on the nature of the organization, with examples being:
In the business sector, labels such as: Public, Sensitive, Private, Confidential.
In the government sector, labels such as: Unclassified, Sensitive But Unclassified, Restricted, Confidential, Secret, Top Secret and their non-English equivalents.
In cross-sectoral formations, the Traffic Light Protocol, which consists of: White, Green, Amber and Red.

All employees in the organization, as well as business partners, must be trained on the classification schema and understand the required security controls and handling procedures for each classification. The classification assigned to a particular information asset should be reviewed periodically to ensure the classification is still appropriate for the information and to ensure the security controls required by the classification are in place.
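One way such a classification policy might be encoded (labels from the business-sector example above; the controls themselves are invented):

```python
# Hypothetical classification policy table mapping each label to its controls.
POLICY = {
    "Public":       {"encrypt_at_rest": False, "access": "anyone"},
    "Sensitive":    {"encrypt_at_rest": True,  "access": "all employees"},
    "Private":      {"encrypt_at_rest": True,  "access": "named individuals"},
    "Confidential": {"encrypt_at_rest": True,  "access": "named individuals, audited"},
}

def required_controls(label: str) -> dict:
    """Look up the security controls a classification label demands."""
    return POLICY[label]

print(required_controls("Confidential")["access"])  # named individuals, audited
```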
Access control. Access to protected information must be restricted to people who are authorized to access the information. The computer programs, and in many cases the computers that process the information, must also be authorized. This requires that mechanisms be in place to control the access to protected information. The sophistication of the access control mechanisms should be in parity with the value of the information being protected – the more sensitive or valuable the information, the stronger the control mechanisms need to be. The foundation on which access control mechanisms are built starts with identification and authentication.
Identification is an assertion of who someone is or what something is. If a person makes the statement "Hello, my name is John Doe" they are making a claim of who they are. However, their claim may or may not be true. Before John Doe can be granted access to protected information it will be necessary to verify that the person claiming to be John Doe really is John Doe.

Authentication is the act of verifying a claim of identity. When John Doe goes into a bank to make a withdrawal, he tells the bank teller he is John Doe (a claim of identity). The bank teller asks to see a photo ID, so he hands the teller his driver's license. The bank teller checks the license to make sure it has John Doe printed on it and compares the photograph on the license against the person claiming to be John Doe. If the photo and name match the person, then the teller has authenticated that John Doe is who he claimed to be.

There are three different types of information that can be used for authentication: something you know, something you have, or something you are. Examples of something you know include such things as a PIN, a password, or your mother's maiden name. Examples of something you have include a driver's license or a magnetic card. Something you are refers to biometrics. Examples of biometrics include palm prints, finger prints, voice prints and retina (eye) scans. Strong authentication requires providing information from two of the three different types of authentication information. For example, something you know plus something you have. This is called two-factor authentication.
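A minimal two-factor sketch (all values invented): something you know, a password, plus something you have, a one-time code sent to the user's phone.

```python
import hashlib
import hmac

# Password hash on file (a real system would use a dedicated password
# hash such as bcrypt rather than plain SHA-256).
STORED_HASH = hashlib.sha256(b"correct horse").hexdigest()

def check_password(password: str) -> bool:
    supplied = hashlib.sha256(password.encode()).hexdigest()
    return hmac.compare_digest(supplied, STORED_HASH)

def authenticate(password: str, otp: str, sent_code: str) -> bool:
    # Two-factor: both independent factors must verify.
    return check_password(password) and hmac.compare_digest(otp, sent_code)

print(authenticate("correct horse", "492817", "492817"))  # True
print(authenticate("correct horse", "000000", "492817"))  # False: second factor fails
```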


On computer systems in use today, the Username is the most common form of identification and the Password is the most common form of authentication. Usernames and passwords have served their purpose but in our modern world they are no longer adequate. Usernames and passwords are slowly being replaced with more sophisticated authentication mechanisms.
After a person, program or computer has successfully been identified and authenticated, it must then be determined what informational resources they are permitted to access and what actions they will be allowed to perform (run, view, create, delete, or change). This is called authorization.
Authorization to access information and other computing services begins with administrative policies and procedures. The policies prescribe what information and computing services can be accessed, by whom, and under what conditions. The access control mechanisms are then configured to enforce these policies.
Different computing systems are equipped with different kinds of access control mechanisms - some may even offer a choice of different access control mechanisms. The access control mechanism a system offers will be based upon one of three approaches to access control or it may be derived from a combination of the three approaches.
The non-discretionary approach consolidates all access control under a centralized administration. The access to information and other resources is usually based on the individual's function (role) in the organization or the tasks the individual must perform. The discretionary approach gives the creator or owner of the information resource the ability to control access to those resources. In the mandatory access control approach, access is granted or denied based upon the security classification assigned to the information resource.
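A sketch of the mandatory approach (clearance levels borrowed from the government-sector labels above; the comparison rule is a simplification):

```python
# Mandatory access control: access is decided by comparing the user's
# clearance with the resource's classification, not by the owner's choice.
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def mac_allows(clearance: str, label: str) -> bool:
    """Allow reading a resource only at or below the user's clearance."""
    return LEVELS[clearance] >= LEVELS[label]

print(mac_allows("Secret", "Confidential"))      # True
print(mac_allows("Confidential", "Top Secret"))  # False
```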
Digital mapping
Digital mapping (also called digital cartography) is the process by which a collection of data is compiled and formatted into a virtual image. The primary function of this technology is to produce maps that give accurate representations of a particular area, detailing major road arteries and other points of interest. The technology also allows the calculation of distances from one place to another. Though digital mapping can be found in a variety of computer applications, such as Google Earth, the main use of these maps is with the Global Positioning System, or GPS satellite network, used in standard automotive navigation systems.

History. The roots of digital mapping lie within traditional paper maps. Paper maps provide basic landscapes similar to digitized road maps, yet are often cumbersome, cover only a designated area, and lack many specific details such as road blocks. In addition, there is no way to “update” a paper map except to obtain a new version. On the other hand, digital maps, in many cases, can be updated through synchronization with updates from company servers. Early digital maps had the same basic functionality as paper maps – that is, they provided a “virtual view” of roads generally outlined by the terrain encompassing the surrounding area. However, as digital maps have grown with the expansion of GPS technology in the past decade, live traffic updates, points of interest and service locations have been added to enhance digital maps to be more “user conscious”. Traditional “virtual views” are now only part of digital mapping. In many cases, users can choose between virtual maps, satellite (aerial views), and hybrid (a combination of virtual map and aerial views) views. With the ability to update and expand digital mapping devices, newly constructed roads and places can be added to appear on maps.


Data Collection. Digital maps heavily rely upon a vast amount of data collected over time. Most of the information that comprises digital maps is the culmination of satellite imagery as well as street level information. Maps must be updated frequently to provide users with the most accurate reflection of a location. While there is a wide spectrum of companies that specialize in digital mapping, the basic premise is that digital maps will accurately portray roads as they actually appear to give "life-like experiences".
Functionality and Use. Computer programs and applications such as Google Earth and Google Maps provide map views from space and street level of much of the world. Used primarily for recreational purposes, Google Earth provides digital mapping in personal applications, such as tracking distances or finding locations. The development of mobile computing (tablet PCs, laptops, etc.) has recently (since about 2000) spurred the use of digital mapping in the sciences and applied sciences. As of 2009, science fields that use digital mapping technology include geology, engineering, architecture, land surveying, mining, forestry, environment, and archaeology. The principal way in which digital mapping has grown in the past decade has been its connection to Global Positioning System (GPS) technology. GPS is the foundation behind digital mapping navigation systems. The coordinates and position, as well as atomic time, obtained by a terrestrial GPS receiver from GPS satellites orbiting the Earth interact together to provide the digital mapping programming with points of origin in addition to the destination points needed to calculate distance. This information is then analyzed and compiled to create a map that provides the easiest and most efficient way to reach a destination. More technically speaking, the device operates in the following manner: GPS receivers collect data from "at least twenty-four GPS satellites" orbiting the Earth, calculating position in three dimensions. (A short distance-calculation sketch follows the numbered steps below.)


  1. The GPS receiver then utilizes position to provide GPS coordinates, or exact points of latitudinal and longitudinal direction from GPS satellites.
  2. The points, or coordinates, output an accurate range between approximately "10-20 meters" of the actual location.
  3. The beginning point, entered via GPS coordinates, and the ending point (address or coordinates) input by the user, are then entered into the digital map.
  4. The map outputs a real-time visual representation of the route. The map then moves along the path of the driver.
  5. If the driver drifts from the designated route, the navigation system will use the current coordinates to recalculate a route to the destination location.
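The distance calculation mentioned earlier can be sketched with the haversine formula, which gives the great-circle distance between two latitude/longitude points (a standard formula, not necessarily what any particular navigation device uses):

```python
# Great-circle distance between two latitude/longitude points (haversine).
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Distance in kilometres along the Earth's surface."""
    R = 6371.0  # mean Earth radius, km
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

# Two example coordinate pairs (roughly London and Paris):
print(f"{haversine_km(51.5074, -0.1278, 48.8566, 2.3522):.0f} km")  # ~344 km
```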


Computers
Generally, any device that can perform numerical calculations, even an adding machine, may be called a computer, but nowadays this term is used especially for digital computers. Computers that once weighed 30 tons now may weigh as little as 1.8 kilograms. Microchips and microprocessors have considerably reduced the cost of the electronic components required in a computer. Computers come in many sizes and shapes, such as special-purpose computers, laptops, desktops, minicomputers and supercomputers.

Special-purpose computers can perform specific tasks and their operations are limited to the programmes built into their microchips. These computers are the basis for electronic calculators and can be found in thousands of electronic products, including digital watches and automobiles. Basically, these computers do the ordinary arithmetic operations such as addition, subtraction, multiplication and division.

General-purpose computers are much more powerful because they can accept new sets of instructions. The smallest fully functional computers are called laptop computers. Most of the general-purpose computers known as personal or desktop computers can perform almost 5 million operations per second.
Today's personal computers are known to be used for different purposes: for testing new theories or models that cannot be examined with experiments, as valuable educational tools due to various encyclopedias, dictionaries, educational programmes, in book-keeping, accounting and management. Proper application of computing equipment in different industries is likely to result in proper management, effective distribution of materials and resources, more efficient production and trade.

Minicomputers are high-speed computers that have greater data manipulating capabilities than personal computers do and that can be used simultaneously by many users. These machines are primarily used by larger businesses or by large research and university centers. The speed and power of supercomputers, the highest class of computers, are almost beyond comprehension, and their capabilities are continually being improved. The most complex of these machines can perform nearly 32 billion calculations per second and store 1 billion characters in memory at one time, and can do in one hour what a desktop computer would take 40 years to do. They are used commonly by government agencies and large research centers. Linking together networks of several small computer centers and programming them to use a common language has enabled engineers to create the supercomputer. The aim of this technology is to elaborate a machine that could perform a trillion calculations per second.
Digital computers
There are two fundamentally different types of computers: analog and digital. The former type solves problems by using continuously changing data such as voltage. In current usage, the term "computer" usually refers to high-speed digital computers. These computers are playing an increasing role in all branches of the economy.
Digital computers are based on manipulating discrete binary digits (1s and 0s). They are generally more effective than analog computers for four principal reasons: they are faster; they are not so susceptible to signal interference; they can transfer huge databases more accurately; and their coded binary data are easier to store and retrieve than the analog signals.
For all their apparent complexity, digital computers are considered to be simple machines. Digital computers are able to recognize only two states in each of their millions of switches, "on" or "off", or high voltage or low voltage. By assigning binary numbers to these states, 1 for "on" and 0 for "off", and linking many switches together, a computer can represent any type of data from numbers to letters and musical notes. It is this process of recognizing signals that is known as digitization. The real power of a computer depends on the speed with which it checks switches per second. The more switches a computer checks in each cycle, the more data it can recognize at one time and the faster it can operate, each switch being called a binary digit or bit.
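The "two states" idea can be seen directly (a small illustration, not from the text): any value reduces to a pattern of on/off switches.

```python
# The letter 'A' and the number 42 as the 8-bit patterns a computer stores.
for value in (ord("A"), 42):
    print(value, format(value, "08b"))
# 65 01000001
# 42 00101010
```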
A digital computer is a complex system of four functionally different elements: 1) the central processing unit (CPU), 2) input devices, 3) memory-storage devices called disk drives, and 4) output devices. These physical parts and all their physical components are called hardware.

The power of computers depends greatly on the characteristics of memory-storage devices. Most digital computers store data both internally, in what is called main memory, and externally, on auxiliary storage units. As a computer processes data and instructions, it temporarily stores information internally on special memory microchips. Auxiliary storage units supplement the main memory when programmes are too large and they also offer a more reliable method for storing data. There exist different kinds of auxiliary storage devices, removable magnetic disks being the most widely used. They can store up to 100 megabytes of data on one disk, a byte being known as the basic unit of data storage.


Output devices let the user see the results of the computer's data processing. Being the most commonly used output device, the monitor accepts video signals from a computer and shows different kinds of information such as text, formulas and graphics on its screen. With the help of various printers, information stored in one of the computer's memory systems can be easily printed on paper in a desired number of copies.
Programmes, also called software, are detailed sequences of instructions that direct the computer hardware to perform useful operations. Due to a computer's operating system, hardware and software systems can work simultaneously. An operating system consists of a number of programmes coordinating operations, translating the data from different input and output devices, regulating data storage in memory, transferring tasks to different processors, and providing functions that help programmers to write software. In large corporations software is often written by groups of experienced programmers, each person focusing on a specific aspect of the total project. For this reason, scientific and industrial software sometimes costs much more than do the computers on which the programmes run.
The first hackers


  1. The first "hackers" were students at the Massachusetts Institute of Technology (MIT) who belonged to the TMRC (Tech Model Railroad Club). Some of the members really built model trains. But many were more interested in the wires and circuits underneath the track platform. Spending hours at TMRC creating better circuitry was called "a mere hack." Those members who were interested in creating innovative, stylistic, and technically clever circuits called themselves (with pride) hackers.




  2. During the spring of 1959, a new course was offered at MIT, a freshman programming class. Soon the hackers of the railroad club were spending days, hours, and nights hacking away at their computer, an IBM 704. Instead of creating a better circuit, their hack became creating faster, more efficient programs - with the least number of lines of code. Eventually they formed a group and created the first set of hacker's rules, called the Hacker's Ethic.




  3. Steven Levy, in his book Hackers, presented the rules:

Rule 1: Access to computers - and anything which might teach you something about the way the world works - should be unlimited and total.
Rule 2: All information should be free.

Rule 3: Mistrust authority - promote decentralization.

Rule 4: Hackers should be judged by their hacking, not bogus criteria such as degrees, race, or position.
Rule 5: You can create art and beauty on a computer.

Rule 6: Computers can change your life for the better.



  4. These rules made programming at MIT's Artificial Intelligence Laboratory a challenging, all-encompassing endeavor. Just for the exhilaration of programming, students in the AI Lab would write a new program to perform even the smallest tasks. The program would be made available to others who would try to perform the same task with fewer instructions. The act of making the computer work more elegantly was, to a bona fide hacker, awe-inspiring.

  5. Hackers were given free rein on the computer by two AI Lab professors, "Uncle" John McCarthy and Marvin Minsky, who realized that hacking created new insights. Over the years, the AI Lab created many innovations: LIFE, a game about survival; LISP, a new kind of programming language; the first computer chess game; The CAVE, the first computer adventure; and SPACEWAR, the first video game.
Computer crimes
More and more, the operations of our businesses, governments, and financial institutions are controlled by information that exists only inside computer memories. Anyone clever enough to modify this information for his own purposes can reap substantial rewards. Even worse, a number of people who have done this and been caught at it have managed to get away without punishment.
These facts have not been lost on criminals or would-be criminals. A recent Stanford Research Institute study of computer abuse was based on 160 case histories, which probably are just the proverbial tip of the iceberg. After all, we only know about the unsuccessful crimes. How many successful ones have gone undetected is anybody's guess.
Here are a few areas in which computer criminals have found the pickings all too easy.
Banking. All but the smallest banks now keep their accounts on computer files. Someone who knows how to change the numbers in the files can transfer funds at will. For instance, one programmer was caught having the computer transfer funds from other people's accounts to his wife's checking account. Often, traditionally trained auditors don't know enough about the workings of computers to catch what is taking place right under their noses.

Business. A company that uses computers extensively offers many opportunities to both dishonest employees and clever outsiders. For instance, a thief can have the computer ship the company's products to addresses of his own choosing. Or he can have it issue checks to him or his confederates for imaginary supplies or services. People have been caught doing both.


Credit Cards. There is a trend toward using cards similar to credit cards to gain access to funds through cash-dispensing terminals. Yet, in the past, organized crime has used stolen or counterfeit credit cards to finance its operations. Banks that offer after-hours or remote banking through cash-dispensing terminals may find themselves unwillingly subsidizing organized crime.
Theft of Information. Much personal information about individuals is now stored in computer files. An unauthorized person with access to this information could use it for blackmail. Also, confidential information about a company's products or operations can be stolen and sold to unscrupulous competitors. (One attempt at the latter came to light when the competitor turned out to be scrupulous and turned in the people who were trying to sell him stolen information.)

Software Theft. The software for a computer system is often more expensive than the hardware. Yet this expensive software is all too easy to copy. Crooked computer experts have devised a variety of tricks for getting these expensive programs printed out, punched on cards, recorded on tape, or otherwise delivered into their hands. This crime has even been perpetrated from remote terminals that access the computer over the telephone.




Theft of Time-Sharing Services. When the public is given access to a system, some members of the public often discover how to use the system in unauthorized ways. For example, there are the "phone freakers" who avoid long distance telephone charges by sending over their phones control signals that are identical to those used by the telephone company.
Since time-sharing systems often are accessible to anyone who dials the right telephone number, they are subject to the same kinds of manipulation.
Of course, most systems use account numbers and passwords to restrict access to authorized users. But unauthorized persons have proved to be adept at obtaining this information and using it for their own benefit. For instance, when a police computer system was demonstrated to a school class, a precocious student noted the access codes being used; later, all the student's teachers turned up on a list of wanted criminals.

Perfect Crimes. It's easy for computer crimes to go undetected if no one checks up on what the computer is doing. But even if the crime is detected, the criminal may walk away not only unpunished but with a glowing recommendation from his former employers.


Of course, we have no statistics on crimes that go undetected. But it's unsettling to note how many of the crimes we do know about were detected by accident, not by systematic audits or other security procedures. The computer criminals who have been caught may have been the victims of uncommonly bad luck.
For example, a certain keypunch operator complained of having to stay overtime to punch extra cards. Investigation revealed that the extra cards she was being asked to punch were for fraudulent transactions. In another case, disgruntled employees of the thief tipped off the company that was being robbed. An undercover narcotics agent stumbled on still another case. An employee was selling the company's merchandise on the side and using the computer to get it shipped to the buyers. While negotiating for LSD, the narcotics agent was offered a good deal on a stereo!
Unlike other embezzlers, who must leave the country, commit suicide, or go to jail, computer criminals sometimes brazen it out, demanding not only that they not be prosecuted but also that they be given good recommendations and perhaps other benefits, such as severance pay. All too often, their demands have been met.
Why? Because company executives are afraid of the bad publicity that would result if the public found out that their computer had been misused. They cringe at the thought of a criminal boasting in open court of how he juggled the most confidential records right under the noses of the company's executives, accountants, and security staff. And so another computer criminal departs with just the recommendations he needs to continue his exploits elsewhere.
Biologically Inspired
Damaging even a single binary digit is enough to shut your computer down. According to computer scientist Peter Bentley, if your car was as brittle as the conventional computer, then every chipped windscreen or wheel scrape would take your car off the road. He is part of a group developing biologically inspired technologies at UCL. They have developed a self-repairing computer, which can instantly recover from crashes by fixing corrupted data.
Bentley started from scratch. He says, ‘if we want a computer to behave like a natural organism, then what would the architecture of that computer look like? I spent several years trying to make the concept as simple as possible.’ He designed a simulation with its own calculus, graph notation, programming language and compiler. His PhD students worked on improvements and developed software and biological models that show it really can survive damage. He continues, ‘we can corrupt up to a third of a program and the computer can regenerate its code, repairing itself and making itself work again.’
Systemic Architecture
A centralized architecture will fail as soon as one component fails. Our brains lose neurons every day but we're fine because the brain can reconfigure itself to make use of what is left. The systemic computer does the same thing. The systemic computer uses a pool of systems where its equivalent of instructions may be duplicated several times.
With a traditional computer, if you wanted to add numbers together, it would have a program with a single add instruction. In a systemic computer it might have several ‘adds’ floating about, any of which might be used to perform that calculation. It's the combination of multiple copies of instructions and data and decentralization, plus randomness, that enables the systemic computer to be robust against damage and repair its own code.
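A toy sketch of that idea (entirely invented, and far simpler than Bentley's systemic computer): duplicate 'add' instructions sit in a pool, one is chosen at random, and losing a copy does not stop the calculation.

```python
import random

# Five duplicate copies of the same 'add' instruction float in a pool.
instruction_pool = [lambda a, b: a + b for _ in range(5)]

def systemic_add(a, b):
    add = random.choice(instruction_pool)  # any surviving copy will do
    return add(a, b)

del instruction_pool[0]      # 'damage': one copy of the instruction is lost
print(systemic_add(2, 3))    # still prints 5 - the computation survives
```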
New Programming Concept
Bentley’s team is working to improve the programming language further, and to create software that will allow the computer to learn and adapt to new data. He says they are constantly looking for better hardware on which to implement the computer and would love to collaborate with industry and develop a version of this new kind of computer for everyone.
Algorithm - how do I feel?
Matt Dobson

As we increasingly depend on digital technology for every aspect of our lives, a new smartphone app offers a window on our moods and emotions


Spike Jonze’s much-discussed movie ‘Her’ explores our emotional relationship with our virtual helpers in the future, our interfaces with the many different online activities we will depend on. In the future, perhaps these new interfaces may also help us understand ourselves a little better, like the forthcoming app from the Cambridge-based ei Technologies – ei stands for ‘emotionally intelligent’.



The company is developing an app that will be able to identify people’s moods from smartphone conversations, via the acoustics rather than the content of a conversation. Such a technology has obvious commercial uses in a world where we interact with computer voices for services such as banking. ‘In call centres,’ says CEO Matt Dobson, ‘it’s about understanding how satisfied my customers are. As a consumer you have a perception and that is driven by the modulation and tone in their voice.’

