A collection of texts in Kazakh, Russian and English for developing secondary-level students' skills in the types of speech activity



Parts of a computer
Listen and name the part of a computer.

Alright, this computer part is essential for connecting to the internet. It will be connected to a modem, or maybe to your telephone jack, or to the same place where your cable TV is connected, and you hook it into the back of your computer.

This is a pretty small computer part. It's not too expensive, but not that many people actually have one. It's not rare, but not everyone has one. What it does is allow you to see other people when you are chatting online.

You might be too young to know what this is, because people hardly use these at all anymore, but they were used for storing data. They have basically gone out of style because they can't store nearly as much as some newer storage devices.

If your computer didn't have this, it wouldn't work. Your computer definitely needs to have this. I don't know how it works; it looks really complicated, but I'm glad it works.

This part is also absolutely necessary for your computer to work, but it's not nearly as complicated as the last one. You plug it into the wall and it provides the electricity to power your computer.

Now, the last two items I said were necessary for your computer to function, but without this one there would never have been any computers at all. There would be no one to invent them and no one to use them. This is the reason why we have computers.

Computers and Ears
We're listening to electromagnetic signals from outer space that have been picked up by radio telescope and translated into frequencies that we can hear. I'm Jim Metzner, and this is the Pulse of the Planet, presented by DuPont. The cornucopia of signals in space includes bursts of energy from distant stars and planets mixed together with signals from our own planet, such as radio waves and radar. Well, trying to make sense of it all involves an increasing interdependence between humans and computers. 

"We don't actually listen to the cosmos with earphones. The reason is that the computers are much better at detecting weak signals than we are. They do it the same way that the human ear does it, but their quote senses unquote, their senses are much better." 

Kent Cullers is a physicist with the SETI Institute. SETI stands for the Search for Extra-Terrestrial Intelligence. 

"So the computers do the analysis. We listen to the radio equipment because it tells us whether in general the systems are behaving well. And from time to time we enhance what the computers do to make sure that in the end, what is supposed to correlate with our senses actually does. You need a direct perceptual link with the science that you do or in fact, you never quite believe it. The data is too abstract. So, yes, sound is useful for reality contact, but computers make billions of tests per second. No human being can possibly do that. I design the equipment that look for weak signals from the stars and I design the methods for weeding out the rather stronger signals that come from the earth. Within a century we will have searched the galaxy. But the only way that is possible is through the power of the growth of the computers." 

Pulse of the Planet is presented by DuPont, bringing you the miracles of science, with additional support provided by the National Science Foundation. I'm Jim Metzner. 

Disasters and Social Media



The sounds of an earthquake in Japan, posted online shortly after the event. Social media is changing the way that we respond to a natural disaster. I'm Jim Metzner and this is the Pulse of the Planet.

Zobel: I don't know that we want to base our systems on the use of social media, but I think it's a very important tool to add more information to the picture.

Chris Zobel is a Professor of Business Information Technology at Virginia Tech. He helps municipalities and relief organizations to plan for disasters.

Zobel: One of the issues with social media is that it's much harder to establish the truthfulness of what somebody is saying. And so, you don't necessarily want to put as much belief in tweets coming from people you've never heard of before as you would in those from someone who's a fireman who happens to be on the scene.

Zobel: There are a number of good examples of emergent groups where people have come together in response to, for example, the disaster in Haiti. There's a group called Crisis Mappers, where a bunch of people who are very good with computers and very good with maps got together and built a new piece of software to identify exactly where in Port-au-Prince the damage had occurred. It enabled people in Port-au-Prince who might be buried under rubble to send a tweet saying, "I'm here." The group back in the United States could then collect that information and pass it along to people who were actually in-country, so that they could mobilize the resources to go find those people.


3D Printing - Quadcopter



You've seen remote-controlled copters. Here's one with a difference: it was made on a 3D printer. I'm Jim Metzner and this is the Pulse of the Planet.
Buss: So, this is a remote-controlled quadcopter. It flies with four motors and a control board in the center. It uses propellers that are nine inches long to create thrust. This one here's set up with a camera and a video transmitter to fly remotely.
That's Cam Buss, a student at Blacksburg High School and an intern at Virginia Tech's DREAMS lab, where they do a lot of 3D printing. Using layer upon layer of polymer plastic, 3D printers can manufacture just about anything you can dream up, including helicopters. This quadcopter was designed by Cam. Except for the electronics, the entire folding structure was made on a 3D printer.

Buss: So I used computer-aided design, so it was completely on the computer, and I modeled each part and then did a stress test. After three designs, I finally accomplished it, and it's fully 3D-printed and folds up into a circular tube.

Williams: The entire design process was done digitally.

Chris Williams is Director of the Design, Research, and Education for Additive Manufacturing Systems or DREAMS Laboratory at Virginia Tech.

Williams: So, when Cam mentions that he did a stress test, it means he did that virtually, using analysis software. And he did the assembly, meaning the integration of parts, digitally as well. And because the input to the 3D printer is the digital file itself, the entire process is automated. So, Cam put the part into the printer and left it to print by itself overnight. You can come in the next day, and the print is complete.
Computational Psychiatry - A Periodic Table of the Mind

Might it be possible to create something like a periodic table for the human mind? I'm Jim Metzner and this is the Pulse of the Planet.

Montague: When you're sitting here looking me in the eye and measuring me up, what exactly do you see happening in me? And the answer is almost nothing. I'm just sitting here, but a whole lot of stuff is going on inside of my head. And so, one of the things I focus on are these quiet, silent operations that go on during social interactions.

Read Montague is director of the Human Neuroimaging Lab and Computational Psychiatry Unit at the Virginia Tech Carilion Research Institute. He and his colleagues are trying to find new ways of understanding the way we think, and it has a lot to do with how we interact with others.

Montague: We have had to invent a series of new kinds of technologies for studying active social interactions. So, we've designed a way to link brain-scanning devices up, set people into staged social interactions, and eavesdrop on both of the interacting minds.

So, how do I turn your feelings into numbers in a way that's useful for me to understand healthy human cognition and the way it breaks down in disease and injury? That's the question we're after. So, we take pairs of people, up to 20-25 people at a time. We construct staged experiments where they're trading back and forth. They make gestures to one another over favors, over simple things like this. You try to guess what the other person is thinking, so to speak, something that you do every day of the week in a normal context.

And our hope is that we can use these staged interactions to take the whole of your cognition, chop it into a bunch of little pieces, and then, use those pieces reassembled to make a model of how it is that humans navigate their way around the world and think about other people. Using that model, you could then characterize how particular people are different. We call that computational psychiatry. It's a very new area. The analogy is with the periodic table of the elements.

Network systems
A=Agatha; K=Katharina

A: Hi, Katharina. It’s good to see you again. How are you?

K: I’m fine. And you?

A: Fine, thanks.

K: I’m really glad to hear about your success.

A: Thank you.

K: So how can I help you?

A: I wanted to see you because I need your advice. We think we should offer our products and services online to increase our market share. What do you think?

K: That’s a great idea. You should definitely do that.

A: Good. So what exactly should I do?

K: I’d recommend that you set up an E-commerce flower shop.
A: OK.

K: I’ll send you an e-mail with some recommendations.

A: Oh, thank you very much. We ought to be ready for Mother’s Day.

K: In that case, I'd suggest we start right away. Let me ask you some questions…

B = Boris; A = Ahsan

B: I have a problem with the network download speed. What can you suggest?

A: Why don't you change the hub?

B: I don't think that will work. The hub is fine.

A: OK. How about adding a repeater then?

B: Hmm, I 'm not sure it will help. It's not a problem with the signal strength.

A: OK, then you should check the cables and network devices to make sure that they are compatible with your network.

B: What about changing the modem?

A: I don't think it's necessary. I think it's a problem with the bridge, switch or the router. You should look at the specifications.

B: OK, I will. Thanks for your help.

A: Why don't you check user recommendations on the internet as well?

B: Good idea. I'll do that.
IT security and safety
L=Ludek; A=Ales

L: Ales, can you check my laptop? Nothing seems to work.

A: Hmm, what have you done this time? Wow! Your laptop is a mess.

L: Sorry about that. I'll clean it up.

A: Have you updated your antivirus software recently?

L: Yes, I have. I did it last week.

A: Well, that's good.

L: I'm afraid I may lose my project. I haven't backed it up.

A: Hmm. You might have spyware or some other malware on your computer. You should install a good spyware doctor program. An antivirus program may not catch everything.

L: OK, I'll do that.

A: And why don't you protect your WLAN access with a password? Otherwise you are likely to attract hackers and piggybackers, and then you might lose a lot of work.

L: Fine, I'll do that.

A: I'll scan your system with my anti-spyware software now and see if there is a problem.

L: Thanks.


H = Helpdesk technician; T = Tuka

H: Hello, Aqhel speaking. How can I help you?

T: Hi, my name's Tuka. I've upgraded my computer to Windows 7 and now I can't find my personal files anywhere!

H: I see.

T: I've checked Windows 'help' and that didn't tell me anything. I need one file urgently.

H: I'm sure we can find your file. Don't worry.

T: Well, I hope so.

H: What Windows version did you have before?

T: Before I had Windows Vista.

H: OK. Is your computer on?

T: Yes, it is.

H: Good. Find the Windows.old folder on your C drive.

T: I don't understand. How? I can't see it in Windows Explorer.

H: Please go to the search box, type Windows.old and press Enter.

T: OK.

H: The Windows.old folder contains different folders. Your folders and files are in Documents and Settings. You should find the files there.

T: I'll do that.

H: I'll come down to your office if you still have a problem. Good luck.

T: Thanks.


Google Reveals The Computers Behind The Cloud
STEVE INSKEEP:

Next, we'll visit the Internet cloud. That's a term people often use to describe the place, or places, where we store stuff online. Increasingly, our databases, email accounts and other information are stored not on local computers but in giant low-rise warehouses packed with computer servers. They're all over the country and all around the world, and they are huge consumers of energy.

Google has around 20 of these data centers, and recently allowed technology writer Steven Levy into one of them in North Carolina to show off how energy-efficient Google is trying to be.

It gave Levy a glimpse into the online world that people rarely get to see. And he writes about his experience in the new issue of Wired magazine.

STEVEN LEVY:

What strikes you immediately is the scale of things. The room is so huge you can almost see the curvature of the Earth at the far end. And wall to wall are racks and racks and racks of servers with blinking blue lights, and each one is many, many times more powerful, with more capacity, than my laptop. And you're in the throbbing heart of the Internet. And you really feel it.

INSKEEP:

So you're in this data center. It's using energy quite efficiently compared to the average data center, even a pretty good one. What are some of the techniques that are used and what do they look like?

LEVY:

Well, one technique that Google really pioneered was, you know, keeping things hotter than has been traditionally expected in a data center. In old data centers, you would put on a sweater before you went in there. Google felt that you could run the general facility somewhat warmer than even normal room temperature. When I walked into Lenoir, I think it was 77 degrees.



INSKEEP:

And that doesn't run down the computing equipment?

LEVY:

Computer equipment is actually tougher than people expect. And they isolate the really hot air that comes out from the back of the servers into what's known as a hot aisle, and that's sealed off and it's maybe 120 degrees, and that's where they take that very hot air and do the water cooling.



Google takes a look at the geography and the resources every time they build a data center. So in North Carolina, they did something that was relatively traditional. They have these coolers where the water circulating goes outside and cools down before it reenters the data center to cool down the servers. But in Finland, which I did visit, they use seawater to cool the data center.

INSKEEP:


Now this is a huge issue, because computers generate so much heat that keeping them cool would be a tremendous use of energy, a tremendous waste of energy in the view of some.

LEVY:


There's no way around it. These things burn a lot of energy, and a lot of the energy in a data center is used to cool it down so the computers don't melt. Data centers in general consume roughly 1.5 percent of all the world's electricity.

INSKEEP:


So as you're talking, I'm thinking about cloud computing, the Internet cloud. And many of us are getting used to this idea that if we have an email account, it might not be saved in the machine where we are at; it's going off somewhere. But once you actually got in to look at one of these places and hear it, feel it, did it change your perceptions of what's going on in the world when you went back to your computer screen at home?

LEVY:


It actually did. You know, many, many years ago I went on a journalistic quest for Einstein's brain, which was lost then. And I felt if I saw it, it might be an anticlimax. But when I actually did see it, it really opened up my eyes; it was a revelation. This is where, you know, the power of the atom came from and relativity and all those other things. And I had the same kind of experience inside that Google data center. Here was the ephemeral made real, you know, the cloud really was something and it was something quite remarkable and breathtaking.

INSKEEP:


Steven Levy, thanks very much.

LEVY:


Thank you.

INSKEEP:


He's a senior writer for Wired magazine.
Internet safety lessons for 5-year-olds
7th February, 2013

A British organization has recommended that children as young as five should be given instruction on the dangers of the Internet.

The U.K. Safer Internet Centre is co-funded by the European Commission and delivers a wide range of activities and initiatives to promote the safe and responsible use of technology.

Britain's National Society for the Prevention of Cruelty to Children (NSPCC) welcomed the advice and urged schools to provide appropriate guidance on Internet use.

The NSPCC's Claire Lilley warned of the dangers youngsters faced by being online. She said: "We are facing an e-safety time bomb. Young people tell us they are experiencing all sorts of new forms of abuse on a scale never seen before."

The Safer Internet Centre published an online survey of children's reflections on the Internet on February 5th, to coincide with the UK's Safer Internet Day.

The report summarizes the opinions of 24,000 schoolchildren. It found that 31% of seven to 11-year-olds said that gossip or mean comments online had stopped them from enjoying the Internet.

Children also said they had been exposed to online pornography, experienced cyber-bullying and had been forced into sending indecent images of themselves to others.

The report said: "Promoting a safer and better Internet for children…involves promoting their online rights - to be safe online, to report concerns and to manage their privacy."
Texts for reading

Information security

A hacker’s life
Have you ever locked yourself out of your home and had to try to break in? First, you get a sense of accomplishment in succeeding. But then comes the worrying realisation that if you can break into your own place as an amateur, a professional could do so five times faster. So you look at the weak point in your security and fix it. Well, that’s more or less how the DefCon hackers conference works.

Every year passionate hackers meet at DefCon in Las Vegas to present their knowledge and capabilities. Mention the word ‘hacker’ and many of us picture a seventeen-year-old geek sitting in their bedroom, illegally hacking into the US’s defence secrets in the Pentagon. Or we just think ‘criminals’. But that is actually a gross misrepresentation of what most hackers do.

The activities and experiments that take place at DefCon have an enormous impact on our daily lives. These are people who love the challenge of finding security gaps: computer addicts who can’t break the habit. They look with great scrutiny at all kinds of systems, from the Internet to mobile communications to household door locks. And then they try to hack them. In doing so, they are doing all of us a great service, because they pass on their findings to the industries that design these systems, which are then able to plug the security holes.

A graphic example of this is when I attended a presentation on electronic door locks. Ironically, one of the most secure locks they demonstrated was a 4,000-year-old Egyptian tumbler lock. But when it came to more modern devices, the presenters revealed significant weaknesses in several brands of electro-mechanical locks. A bio-lock that uses a fingerprint scan for entry was defeated, easily, by a paper clip. (Unfortunately, although all the manufacturers of the insecure locks were alerted, not all of them responded.)

DefCon is a vast mix of cultures as well as a culture in itself. People in dark clothes and ripped jeans talk to people in golf shirts and khakis. Social status here is based on knowledge and accomplishment, not on clothing labels or car marques. It’s kind of refreshing. There are government agents here, as well as video game enthusiasts. Not that people ask each other where they work – that would break the hackers’ etiquette.

In an attempt to attract the brightest hackers, DefCon runs a competition called Capture the Flag. Capture the Flag pits elite hackers against each other in a cyber game of network attack and defence that goes on 24 hours a day. In a large, dimly lit conference hall, small groups of hackers sit five metres from each other, intensely trying either to protect or to break into the system. There are huge video projections on the walls, pizza boxes and coffee cups are strewn everywhere. The room is mesmerising.

In another room, another contest is taking place. Here participants have five minutes to free themselves from handcuffs, escape from their ‘cell’, get past a guard, retrieve their passport from a locked filing cabinet, leave through another locked door, and make their escape to freedom.

If you’re someone who dismisses the DefCon attendees as a group of geeks and social misfits, then you probably have the same password for 90 per cent of your online existence. Which means you are doomed. Because even if you think you’re being clever by using your grandmother’s birth date backwards as a secure key, you’re no match for the people that I’ve met. There is no greater ignorance to be found online than that of an average internet user. I’m happy to admit that I’m one of them. I’m also aware that there are other people out there – big business among them – who are trying to get more and more access to the data of our personal online habits. Sadly, we have few tools to protect ourselves. But there is a group of people who are passionate about online freedom and have the means to help us protect our privacy. Many of them can be found at DefCon.


Computer
Computer software, or simply software, is that part of a computer system that consists of encoded information or computer instructions, in contrast to the physical hardware from which the system is built.

The term "software" was first proposed by Alan Turing and used in this sense by John W.Tukey in 1957. In computer science and software engineering, computer software is all information processed by computer systems, programs and data.

Computer software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. Computer hardware and software require each other and neither can be realistically used on its own.

At the lowest level, executable code consists of machine language instructions specific to an individual processor—typically a central processing unit (CPU). A machine language consists of groups of binary values signifying processor instructions that change the state of the computer from its preceding state. For example, an instruction may change the value stored in a particular storage location in the computer—an effect that is not directly observable to the user. An instruction may also (indirectly) cause something to appear on a display of the computer system—a state change which should be visible to the user. The processor carries out the instructions in the order they are provided, unless it is instructed to "jump" to a different instruction, or interrupted.

The majority of software is written in high-level programming languages that are easier and more efficient for programmers, meaning closer to a natural language. High-level languages are translated into machine language using a compiler or an interpreter, or a combination of the two. Software may also be written in a low-level assembly language, essentially a vaguely mnemonic representation of a machine language using a natural-language alphabet, which is translated into machine language using an assembler.
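To make the contrast concrete, here is a minimal illustrative sketch (not part of the original text) using Python's built-in dis module; the bytecode it prints stands in for the lower-level instructions that a single high-level line conceals:

```python
import dis

def add_tax(price):
    # One high-level line; the interpreter executes several lower-level steps.
    return price * 1.2

# Print the instruction sequence behind the single return statement.
dis.dis(add_tax)
```

(Python bytecode is interpreted rather than executed directly by the CPU, so this is an analogy for machine language, not an example of it.)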
Computational Thinking--What and Why?

By Jeannette M. Wing

In a March 2006 article for the Communications of the ACM, I used the term "computational thinking" to articulate a vision that everyone, not just those who major in computer science, can benefit from thinking like a computer scientist [Wing06]. So, what is computational thinking? Here's a definition that Jan Cuny of the National Science Foundation, Larry Snyder of the University of Washington, and I use; it was inspired by an email exchange I had with Al Aho of Columbia University:



Computational thinking is the thought processes involved in formulating problems and their solutions so that the solutions are represented in a form that can be effectively carried out by an information-processing agent.

Informally, computational thinking describes the mental activity in formulating a problem to admit a computational solution. The solution can be carried out by a human or machine, or more generally, by combinations of humans and machines.

My interpretation of the words "problem" and "solution" is broad. I mean not just mathematically well-defined problems whose solutions are completely analyzable, e.g., a proof, an algorithm, or a program, but also real-world problems whose solutions might be in the form of large, complex software systems. Thus, computational thinking overlaps with logical thinking and systems thinking. It includes algorithmic thinking and parallel thinking, which in turn engage other kinds of thought processes, such as compositional reasoning, pattern matching, procedural thinking, and recursive thinking. Computational thinking is used in the design and analysis of problems and their solutions, broadly interpreted.


The Value of Abstraction

The most important and high-level thought process in computational thinking is the abstraction process. Abstraction is used in defining patterns, generalizing from specific instances, and parameterization. It is used to let one object stand for many. It is used to capture essential properties common to a set of objects while hiding irrelevant distinctions among them. For example, an algorithm is an abstraction of a process that takes inputs, executes a sequence of steps, and produces outputs to satisfy a desired goal. An abstract data type defines an abstract set of values and operations for manipulating those values, hiding the actual representation of the values from the user of the abstract data type. Designing efficient algorithms inherently involves designing abstract data types.
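As a small illustration (a sketch added here, not from Wing's article), an abstract data type can be expressed in code as a set of operations that hides the underlying representation; Python stands in for any language:

```python
class Stack:
    """An abstract data type: callers see push/pop/peek, never the list inside."""

    def __init__(self):
        self._items = []  # hidden representation; could be swapped for a linked list

    def push(self, value):
        self._items.append(value)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

    def peek(self):
        return self._items[-1]

s = Stack()
s.push(1)
s.push(2)
print(s.pop())  # 2 -- the caller never touches the underlying list
```

Because users depend only on the operations, the representation can change without breaking any code built on top of it, which is exactly the kind of layering described below.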

Abstraction gives us the power to scale and deal with complexity. Applying abstraction recursively allows us to build larger and larger systems, with the base case (at least for computer science) being bits (0's and 1's). In computing, we routinely build systems in terms of layers of abstraction, allowing us to focus on one layer at a time and on the formal relations (e.g., "uses," "refines" or "implements," "simulates") between adjacent layers.  When we write a program in a high-level language, we're building on lower layers of abstractions. We don't worry about the details of the underlying hardware, the operating system, the file system, or the network; furthermore, we rely on the compiler to correctly implement the semantics of the language. The narrow-waist architecture of the Internet demonstrates the effectiveness and robustness of appropriately designed abstractions: the simple TCP/IP layer at the middle has enabled a multitude of unforeseen applications to proliferate at layers above, and a multitude of unforeseen platforms, communications media, and devices to proliferate at layers below.

Computational thinking draws on both mathematical thinking and engineering thinking. Unlike mathematics, however, our computing systems are constrained by the physics of the underlying information-processing agent and its operating environment. And so, we must worry about boundary conditions, failures, malicious agents, and the unpredictability of the real world. And unlike other engineering disciplines, in computing --thanks to software, our unique "secret weapon"--we can build virtual worlds that are unconstrained by physical realities. And so, in cyberspace our creativity is limited only by our imagination.


Computational Thinking and Other Disciplines
Computational thinking has already influenced the research agenda of all science and engineering disciplines. Starting decades ago with the use of computational modeling and simulation, through today's use of data mining and machine learning to analyze massive amounts of data, computation is recognized as the third pillar of science, along with theory and experimentation [PITAC 2005].

The expedited sequencing of the human genome through the "shotgun algorithm" awakened the interest of the biology community in computational methods, not just computational artifacts (such as computers and networks).  The volume and rate at which scientists and engineers are now collecting and producing data--through instruments, experiments and simulations--are demanding advances in data analytics, data storage and retrieval, as well as data visualization. The complexity of the multi-dimensional systems that scientists and engineers want to model and analyze requires new computational abstractions. 

These are just two reasons that every scientific directorate and office at the National Science Foundation participates in the Cyber-enabled Discovery and Innovation, or CDI, program, an initiative started four years ago with a fiscal year 2011 budget request of $100 million. CDI is in a nutshell "computational thinking for science and engineering."

Computational thinking has also begun to influence disciplines and professions beyond science and engineering. For example, areas of active study include algorithmic medicine, computational archaeology, computational economics, computational finance, computation and journalism, computational law, computational social science, and digital humanities. Data analytics is used in training Army recruits, detecting email spam and credit card fraud, recommending and ranking the quality of services, and even personalizing coupons at supermarket checkouts.

At Carnegie Mellon, computational thinking is everywhere. We have degree programs, minors, or tracks in "computational X" where X is applied mathematics, biology, chemistry, design, economics, finance, linguistics, mechanics, neuroscience, physics and statistical learning. We even have a course in computational photography. We have programs in computer music, and in computation, organizations and society. The structure of our School of Computer Science hints at some of the ways that computational thinking can be brought to bear on other disciplines. The Robotics Institute brings together computer science, electrical engineering, and mechanical engineering; the Language Technologies Institute, computer science and linguistics; the Human-Computer Interaction Institute, computer science, design, and psychology; the Machine Learning Department, computer science and statistics; the Institute for Software Research, computer science, public policy, and social science. The newest kid on the block, the Lane Center for Computational Biology, brings together computer science and biology. The Entertainment Technology Center is a joint effort of SCS and the School of Drama. SCS additionally offers joint programs in algorithms, combinatorics and optimization (computer science, mathematics, and business); computer science and fine arts; logic and computation (computer science and philosophy); and pure and applied logic (computer science, mathematics, and philosophy).

Computational Thinking in Daily Life
Can we apply computational thinking in daily life? Yes! These stories, helpfully provided by Computer Science Department faculty, demonstrate a few ways:

Pipelining: SCS Dean Randy Bryant was pondering how to make the diploma ceremony at commencement go faster. By careful placement of where individuals stood, he designed an efficient pipeline so that upon the reading of each graduate's name and honors by Assistant Dean Mark Stehlik, each person could receive his or her diploma, then get a handshake or hug from Mark, and then get his or her picture taken. This pipeline allowed a steady stream of students to march across the stage (though a pipeline stall occurred whenever a graduate's cap toppled while getting a hug from Mark).

Seth Goldstein, associate professor of computer science, once remarked to me that most buffet lines could benefit from computational thinking: "Why do they always put the dressing before the salad? The sauce before the main dish? The silverware at the start? They need some pipeline theory."
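A minimal sketch of why pipelining pays off (illustrative arithmetic, not from the article): once the pipeline is full, one graduate finishes per step instead of one per three steps.

```python
# Three stages per graduate: read name, hand over diploma, take photo.

def sequential_time(n_graduates, n_stages=3):
    # Each graduate completes all stages before the next one starts.
    return n_graduates * n_stages

def pipelined_time(n_graduates, n_stages=3):
    # The first graduate fills the pipeline; each later one adds one step.
    return n_stages + (n_graduates - 1)

print(sequential_time(200))  # 600 steps
print(pipelined_time(200))   # 202 steps
```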



Hashing: After giving a talk at a department meeting about computational thinking, Professor Danny Sleator told me about a hashing function his children use to store away Lego blocks at home. According to Danny, they hash on several different categories: rectangular thick blocks, other thick (non-rectangular) blocks, thins (of any shape), wedgies, axles, rivets and spacers, "fits on axle," ball and socket and "miscellaneous." They even have rules to classify pieces that could fit into more than one category. "Even though this is pretty crude, it saves about a factor of 10 when looking for a piece," Danny says. Professor Avrim Blum overheard my conversation with Danny and chimed in "At our home, we use a different hash function."
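A sketch of the Lego hash in code (the categories and rules here are hypothetical, echoing the story): each piece goes into one labeled bin, so a search scans a single bin rather than the whole pile.

```python
bins = {}

def category(piece):
    # Stand-in classification rules; the family's real rules are more detailed.
    if piece["shape"] == "rectangular" and piece["thick"]:
        return "rectangular thick"
    if piece["thick"]:
        return "other thick"
    return "thins"

def store(piece):
    bins.setdefault(category(piece), []).append(piece)

def find(shape, thick):
    # Only one bucket is searched -- roughly the factor-of-10 savings Danny mentions.
    candidates = bins.get(category({"shape": shape, "thick": thick}), [])
    return [p for p in candidates if p["shape"] == shape]

store({"shape": "rectangular", "thick": True})
store({"shape": "wedge", "thick": False})
print(find("rectangular", True))
```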

Sorting: The following story is taken verbatim from an email sent by Roger Dannenberg, associate research professor of computer science and professional trumpeter. "I showed up to a big band gig, and the band leader passed out books with maybe 200 unordered charts and a set list with about 40 titles we were supposed to get out and place in order, ready to play. Everyone else started searching through the stack, pulling out charts one-at-a-time. I decided to sort the 200 charts alphabetically O(N log(N)) and then pull the charts O(M log(N)). I was still sorting when other band members were halfway through their charts, and I started to get some funny looks, but in the end, I finished first. That's computational thinking."
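Roger's strategy, sketched in Python (the titles are invented; the complexity argument is the point): sort once, then each of the M lookups is a binary search instead of a linear scan.

```python
import bisect

charts = ["Satin Doll", "Take Five", "A Night in Tunisia"]  # imagine 200 titles
set_list = ["Take Five", "Satin Doll"]                      # imagine 40 titles

charts.sort()  # one-time cost: O(N log N)

pulled = []
for title in set_list:
    i = bisect.bisect_left(charts, title)  # binary search: O(log N) per title
    if i < len(charts) and charts[i] == title:
        pulled.append(charts[i])

print(pulled)  # ['Take Five', 'Satin Doll']
```

With N = 200 and M = 40, sorting first costs on the order of 200 log 200 + 40 log 200 comparisons, versus roughly 40 x 200 = 8,000 for repeated linear scans.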
Benefits of Computational Thinking
Computational thinking enables you to bend computation to your needs. It is becoming the new literacy of the 21st century. Why should everyone learn a little computational thinking? Cuny, Snyder and I advocate these benefits [CunySnyderWing10]:
Computational thinking for everyone means being able to:
  • Understand which aspects of a problem are amenable to computation,

  • Evaluate the match between computational tools and techniques and a problem,

  • Understand the limitations and power of computational tools and techniques,

  • Apply or adapt a computational tool or technique to a new use,

  • Recognize an opportunity to use computation in a new way, and

  • Apply computational strategies such as divide and conquer in any domain.

Computational thinking for scientists, engineers, and other professionals further means being able to:

  • Apply new computational methods to their problems,

  • Reformulate problems to be amenable to computational strategies,

  • Discover new science through analysis of large data,

  • Ask new questions that no one had thought of, or dared to ask, because of scale, but which are easily addressed computationally, and

  • Explain problems and solutions in computational terms.


Computational Thinking in Education
Campuses throughout the United States and abroad are revisiting their undergraduate curriculum in computer science. Many are changing their first course in computer science to cover fundamental principles and concepts, not just programming. For example, at Carnegie Mellon we recently revised our undergraduate first-year courses to promote computational thinking for non-majors. Moreover, the interest and excitement surrounding computational thinking has grown beyond undergraduate education to additional recent projects, many focused on incorporating computational thinking into kindergarten through 12th grade education. Sponsors include professional organizations, government, academia and industry.

The College Board, with support from NSF, is designing a new Advanced Placement (AP) course that covers the fundamental concepts of computing and computational thinking (see the website at www.csprinciples.org). Five universities are piloting versions of this course this year: University of North Carolina at Charlotte, University of California at Berkeley, Metropolitan State College of Denver, University of California at San Diego and University of Washington. The plan is for more schools--high schools, community colleges and universities--to participate next year.

Computer science is also getting attention from elected officials. In May 2009, computer science thought leaders held an event on Capitol Hill to call on policymakers to put the "C" in STEM, that is, to make sure that computer science is included in all federally-funded educational programs that focus on science, technology, engineering and mathematics (STEM) fields. The event was sponsored by ACM, CRA, CSTA, IEEE, Microsoft, NCWIT, NSF, and SWE.

The U.S. House of Representatives has now designated the first week of December as Computer Science Education Week (www.csedweek.org); the event is sponsored by ABI, ACM, BHEF, CRA, CSTA, Dot Diva, Google, Globaloria, Intel, Microsoft, NCWIT, NSF, SAS, and Upsilon Pi Epsilon. In July 2010, U.S. Rep. Jared Polis (D-CO) introduced the Computer Science Education Act (H.R. 5929) in an attempt to boost K-12 computer science education efforts.

Another boost is expected to come from the NSF's Computing Education for the 21st Century (CE21) program, started in September 2010 and designed to help K-12 students, as well as first- and second-year college students, and their teachers develop computational thinking competencies. CE21 builds on the successes of two NSF programs, CISE Pathways to Revitalized Undergraduate Computing Education (CPATH) and Broadening Participation in Computing (BPC). CE21 has a special emphasis on activities that support the CS 10K Project, an initiative launched by NSF through BPC. CS 10K aims to catalyze a revision of high school curriculum, with the proposed new AP course as a centerpiece, and to prepare 10,000 teachers to teach the new courses in 10,000 high schools by 2015.

Industry has also helped promote the vision of computing for all.  Since 2006, with help from Google and later Microsoft, Carnegie Mellon has held summer workshops for high school teachers called "CS4HS." Those workshops are designed to deliver the message that there is more to computer science than computer programming. CS4HS spread in 2007 to UCLA and the University of Washington. By 2010, under the auspices of Google, CS4HS had spread to 20 schools in the United States and 14 in Europe, the Middle East and Africa. Also at Carnegie Mellon, Microsoft Research funds the Center for Computational Thinking (www.cs.cmu.edu/~CompThink/), which supports both research and educational outreach projects.

Computational thinking has also spread internationally. In August 2010, the Royal Society--the U.K.'s equivalent of the U.S.'s National Academy of Sciences--announced that it is leading an 18-month project to look "at the way that computing is taught in schools, with support from 24 organizations from across the computing community including learned societies, professional bodies, universities and industry." (See www.royalsociety.org/education-policy/projects/.) One organization that has already taken up the challenge in the U.K. is called Computing At School, a coalition run by the British Computing Society and supported by Microsoft Research and other industry partners.
Resources Abound
The growing worldwide focus on computational thinking means that resources are becoming available for educators, parents, students and everyone else interested in the topic. 

In October 2010, Google launched the Exploring Computational Thinking website (www.google.com/edu/computational-thinking), which has a wealth of links to further web resources, including lesson plans for K-12 teachers in science and mathematics. 

Computer Science Unplugged (www.csunplugged.org), created by Tim Bell, Mike Fellows and Ian Witten, teaches computer science without the use of a computer. It is especially appropriate for elementary and middle school children. Several dozen people working in many countries, including New Zealand, Sweden, Australia, China, Korea, Taiwan and Canada, as well as in the United States, contribute to this extremely popular website.

The National Academies' Computer Science and Telecommunications Board held a series of workshops on "Computational Thinking for Everyone" with a focus on identifying the fundamental concepts of computer science that can be taught to K-12 students. The first workshop report [NRC10] provides multiple perspectives on computational thinking.

Additionally, panels and discussions on computational thinking have been plentiful at venues such as the annual ACM Special Interest Group on Computer Science Education (SIGCSE) symposium and the ACM Educational Council. The education committee of the CRA presented a white paper [CRA-E10] at the July 2010 CRA Snowbird conference, which includes recommendations for computational thinking courses for non-majors. CSTA produced and distributes "Computational Thinking Resource Set: A Problem-Solving Tool for Every Classroom," which is available for download on the CSTA website.
Final Remarks--and a Challenge
Computational thinking is not just or all about computer science. The educational benefits of being able to think computationally--starting with the use of abstractions--enhance and reinforce intellectual skills, and thus can be transferred to any domain.

Computer scientists already know the value of thinking abstractly, thinking at multiple levels of abstraction, abstracting to manage complexity, abstracting to scale up, etc. Our immediate task ahead is to better explain to non-computer scientists what we mean by computational thinking and the benefits of being able to think computationally. Please join me in helping to spread the word!


Jeannette Wing is head of the Computer Science Department at Carnegie Mellon University and the President's Professor of Computer Science. She earned her bachelor's, master's and doctoral degrees at the Massachusetts Institute of Technology and has been a member of the Carnegie Mellon faculty since 1985. 
From 2007 to 2010, Wing served as assistant director for the Computer and Information Science and Engineering Directorate of the National Science Foundation. She is a fellow of the American Academy of Arts and Sciences, the American Association for the Advancement of Science, the Association for Computing Machinery and the Institute of Electrical and Electronics Engineers.
Navigating, Learning and Capturing the Latent Semantic Pathways in an Email Corpus
E-mail, while originally designed for asynchronous communication, now serves a host of other overloaded purposes including task management, informal rolodexing and archival storage. Many users suffer from excessive email and attempt to alleviate the problem with a personal categorization or foldering scheme. However, given the sheer volume of email received, manual categorization does not serve as a viable solution. Any attempt to redesign email communication to better suit its current tasks will be in tension with the legacy epistemology that a user has of her Inbox. I propose a system that will enable multi-dimensional categorization, two example dimensions being social networks and action items. The system attempts to discover latent semantic structures within a user's corpus and uses them to perform email categorization. A user's social network is an example of an underlying semantic structure in an email corpus. The unsupervised message classification scheme developed is based on discovering this social network structure. The system extracts and analyzes email header information contained within the user corpora and uses it to create a variety of graph-based social network models. An edge-betweenness centrality algorithm is then applied in conjunction with a ranking scheme to create a set of participant clusters and corresponding message clusters. Having an explicit mapping between a participant and message cluster allows the user to mold the system to fit in with the legacy epistemology and to train it for further use. In addition to this, the system can evolve with time and adapt to new semantic structures. Initial results for the classification scheme are highly encouraging. Novel methods of navigating through an email corpus are also explored. Latent semantic indexing and other similarity measures are used as the basis for an interactive system that will allow the user to extract underlying semantic structure from a corpus and capture it for later use.
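As an illustration of the clustering step (a sketch using the third-party networkx library; the names and messages are invented, not from the abstract), the Girvan-Newman method splits a graph into communities by repeatedly removing edges of highest edge-betweenness centrality:

```python
import networkx as nx
from networkx.algorithms.community import girvan_newman

# Hypothetical sender-recipient pairs extracted from email headers.
messages = [("alice", "bob"), ("alice", "carol"), ("bob", "carol"),
            ("dave", "erin"), ("erin", "frank"), ("dave", "frank"),
            ("carol", "dave")]  # one weak tie bridging the two groups

G = nx.Graph()
G.add_edges_from(messages)

clusters = next(girvan_newman(G))  # first split: removes the bridging edge
print([sorted(c) for c in clusters])
# e.g. [['alice', 'bob', 'carol'], ['dave', 'erin', 'frank']]
```

Messages could then be assigned to whichever participant cluster their senders and recipients fall into, giving the participant-to-message-cluster mapping the abstract describes.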
Design
The emergence of low-cost fabrication technology (most notably 3D printing) has brought us a dawn of making, promising to empower everyday users with the ability to fabricate physical objects of their own design. However, the technology itself is innately oblivious to the physical world—things are, in most cases, assumed to be printed from scratch in isolation from the real world objects they will be attached to and function with. To bridge this ‘gulf of fabrication', my thesis research focuses on developing fabrication techniques with tool integration to enable users to expressively create designs that can be attached to and function with existing real world objects. Specifically, my work explores techniques that leverage the 3D printing process to create attachments directly over, onto and around existing objects; a design tool further enables people to specify and generate adaptations that can be attached to and mechanically transform existing objects in user-customized ways; a mixed-initiative approach allows people to create functionally valid designs, which address real world relationships with other objects; finally, by situating the fabrication environment in the real world, a suite of virtual tools would allow users to design, make, assemble, install and test physical objects in situ directly within the context of their usage. Overall my thesis aims to make fabrication real—innovation in design tools harnesses fabrication technology, enabling things to be made by real people, to address real usage and to function with real objects in the world.
Databases: Their Creation, Management and Utilization
Information systems are the software and hardware systems that support data-intensive applications. The journal Information Systems publishes articles concerning the design and implementation of languages, data models, process models, algorithms, software and hardware for information systems. Subject areas include data management issues as presented in the principal international database conferences (e.g. ACM SIGMOD, ACM PODS, VLDB, ICDE and ICDT/EDBT) as well as data-related issues from the fields of data mining, information retrieval, internet and cloud data management, business process management, web semantics, visual and audio information systems, scientific computing, and organizational behaviour. Implementation papers having to do with massively parallel data management, fault tolerance in practice, and special purpose hardware for data-intensive systems are also welcome.

All papers should motivate the problems they address with compelling examples from real or potential applications. Systems papers must be serious about experimentation either on real systems or simulations based on traces from real systems. Papers from industrial organisations are welcome.

Theoretical papers should have a clear motivation from applications. They should either break significant new ground or unify and extend existing algorithms. Such papers should clearly state which ideas have potentially wide applicability.

In addition to publishing submitted articles, the Editors-in-Chief will invite retrospective articles that describe significant projects by the principal architects of those projects. Authors of such articles should write in the first person, tracing the social as well as technical history of their projects, describing the evolution of ideas, mistakes made, and reality tests. 

Technical results should be explained in a uniform notation with the emphasis on clarity and on ideas that may have applications outside of the environment of that research. Particularly complex details may be summarized with references to previously published papers.

We will make every effort to allow authors the right to republish papers appearing in Information Systems in their own books and monographs.

Editors-in-Chief: 

Dennis Shasha and Gottfried Vossen

Design
Today, creating an academic website goes hand-in-hand with creating your CV and presenting who you are to your academic and professional peers. A well-maintained website is an essential tool in disseminating your research and publications. Use your academic personal website to highlight your personality, profile, research findings, publications, achievements, affiliations and more. In addition, by using some of the many social media tools available, you can further amplify the information contained in your website.

An academic personal website takes you a step further in terms of increasing your visibility because it is an ideal place to showcase your complete research profile. You will attract attention to your publications, your name recognition will increase and you will get cited more. Moreover, a website is also useful for networking and collaborating with others, as well as for job searching and application.


Data storage
Online storage is an emerging method of data storage and backup. A remote server with a network connection and special software backs up files, folders, or the entire contents of a hard drive. There are many companies that provide web-based backup.

One offsite technology in this area is cloud computing. This allows colleagues in an organization to share resources, software and information over the Internet.

Continuous backup and storage on a remote hard drive eliminates the risk of data loss as a result of fire, flood or theft. Remote data storage and backup providers encrypt the data and set up password protection to ensure maximum security.

Small businesses and individuals often choose to save data in a more traditional way. External drives, disks and magnetic tapes are very popular data storage solutions. USB flash drives are very practical for small volumes of data storage and backup. However, they are not very reliable and do not protect the user in case of a disaster.


Types of network
Dear Agatha

Following our meeting last week, please find my recommendations for your business. I think you should set up a LAN, or Local Area Network, and a WAN, or Wide Area Network, for your needs. A LAN connects devices over a small area, for example your apartment and the shop. In addition, you should connect office equipment, such as the printer, scanner and fax machine, to your LAN, because you can then share these devices between users. I'd recommend that we connect the LAN to a WAN so you can link to the Internet and sell your products. In addition, I'd recommend we set up a Virtual Private Network so that you have remote access to your company's LAN when you travel.

A VPN is a private network that uses a public network, usually the Internet, to connect remote sites or users together.

Let's meet on Friday to discuss these recommendations.

Best regards

Katharina


The Digital Divide
A recent survey has shown that the number of people in the United Kingdom who do not intend to get internet access has risen. These people, who are known as 'net refuseniks', make up 44% of UK households, or 11.2 million people in total.

The research also showed that more than 70 percent of these people said that they were not interested in getting connected to the internet. This number has risen from just over 50% in 2005, with most giving a lack of computer skills as their reason for not getting internet access, though some also said it was because of the cost.

More and more people are getting broadband and high speed net is available almost everywhere in the UK, but there are still a significant number of people who refuse to take the first step.

The cost of getting online is going down and internet speeds are increasing, so many see the main challenge as explaining the relevance of the internet to this group. This would encourage them to get connected before they are left too far behind. The gap between those who have access to and use the internet and those who do not is the digital divide, and if the gap continues to widen, those without access will get left behind and miss out on many opportunities, especially in their careers.


The First Computer Programmer
Ada Lovelace was the daughter of the poet Lord Byron. She was taught by Mary Somerville, a well-known researcher and scientific author, who introduced her to Charles Babbage in June 1833. Babbage was an English mathematician, who first had the idea for a programmable computer.

In 1842 and 1843, Ada translated the work of an Italian mathematician, Luigi Menabrea, on Babbage's Analytical Engine. Though mechanical, this machine was an important step in the history of computers; it was the design of a mechanical general-purpose computer. Babbage worked on it for many years until his death in 1871. However, because of financial, political, and legal issues, the engine was never built. The design of the machine was very modern; it anticipated the first completed general-purpose computers by about 100 years.

When Ada translated the article, she added a set of notes which specified in complete detail a method for calculating certain numbers with the Analytical Engine, which has since been recognized by historians as the world's first computer program. She also saw possibilities in it that Babbage hadn't: she realised that the machine could compose pieces of music. The computer programming language 'Ada', used in some aviation and military programs, is named after her.
Atom-sized transistor created by scientists

By David Derbyshire, Science Correspondent

Scientists have shrunk a transistor to the size of a single atom, bringing closer the day of microscopic electronic devices that will revolutionise computing, engineering and medicine.

Researchers at Cornell University, New York, and Harvard University, Boston, fashioned the two "nano-transistors" from purpose-made molecules. When voltage was applied, electrons flowed through a single atom in each molecule.

The ability to use individual atoms as components of electronic circuits marks a key breakthrough in nano-technology, the creation of machines at the smallest possible size.

Prof Paul McEuen, a physicist at Cornell, who reports the breakthrough in today's issue of Nature, said the single-atom transistor did not have all the functions of a conventional transistor such as the ability to amplify.

But it had potential for use as a chemical sensor, responding to any change in its environment.
Basic principles of information security
Key concepts. For over twenty years, information security has held confidentiality, integrity and availability (known as the CIA triad) to be the core principles of information security. There is continuous debate about extending this classic trio. Other principles, such as accountability, have sometimes been proposed for addition. It has been pointed out that issues such as non-repudiation do not fit well within the three core concepts, and as regulation of computer systems has increased (particularly amongst the Western nations), legality is becoming a key consideration for practical security installations. In 1992, and revised in 2002, the OECD's Guidelines for the Security of Information Systems and Networks proposed nine generally accepted principles: Awareness, Responsibility, Response, Ethics, Democracy, Risk Assessment, Security Design and Implementation, Security Management, and Reassessment. Building on those, in 2004 the NIST's Engineering Principles for Information Technology Security proposed 33 principles. Drawing on each of these derived guidelines and practices, in 2002 Donn Parker proposed an alternative model for the classic CIA triad that he called the six atomic elements of information: confidentiality, possession, integrity, authenticity, availability, and utility.

Confidentiality. Confidentiality is the term used to prevent the disclosure of information to unauthorized individuals or systems. For example, a credit card transaction on the Internet requires the credit card number to be transmitted from the buyer to the merchant and from the merchant to a transaction processing network. The system attempts to enforce confidentiality by encrypting the card number during transmission, by limiting the places where it might appear (in databases, log files, backups, printed receipts, and so on), and by restricting access to the places where it is stored. If an unauthorized party obtains the card number in any way, a breach of confidentiality has occurred. Breaches of confidentiality take many forms. Permitting someone to look over your shoulder at your computer screen while you have confidential data displayed on it could be a breach of confidentiality. If a laptop computer containing sensitive information about a company's employees is stolen or sold, it could result in a breach of confidentiality. Giving out confidential information over the telephone is a breach of confidentiality if the caller is not authorized to have the information. Confidentiality is necessary (but not sufficient) for maintaining the privacy of the people whose personal information a system holds.
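As a toy illustration of enforcing confidentiality in transit (a sketch using the third-party cryptography package; the key handling is deliberately simplified):

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # secret shared by sender and receiver
cipher = Fernet(key)

token = cipher.encrypt(b"4111 1111 1111 1111")  # card number never travels in the clear
print(token)                  # unreadable ciphertext to any eavesdropper
print(cipher.decrypt(token))  # only a key holder recovers the number
```

A real payment system layers this inside TLS and strict key management; the point here is only that encryption turns "restricting access" into an enforceable property.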

Integrity. In information security, integrity means that data cannot be modified undetectably. This is not the same thing as referential integrity in databases, although it can be viewed as a special case of consistency as understood in the classic ACID model of transaction processing. Integrity is violated when a message is actively modified in transit. Information security systems typically provide message integrity in addition to data confidentiality.

Availability. For any information system to serve its purpose, the information must be available when it is needed. This means that the computing systems used to store and process the information, the security controls used to protect it, and the communication channels used to access it must be functioning correctly. High availability systems aim to remain available at all times, preventing service disruptions due to power outages, hardware failures, and system upgrades. Ensuring availability also involves preventing denial-of-service attacks.

Authenticity. In computing, e-business and information security it is necessary to ensure that data, transactions, communications or documents (electronic or physical) are genuine. It is also important for authenticity to validate that both parties involved are who they claim to be.

Non-repudiation. In law, non-repudiation implies one's intention to fulfill one's obligations under a contract. It also implies that one party to a transaction cannot deny having received the transaction, nor can the other party deny having sent it. Electronic commerce uses technology such as digital signatures and public key encryption to establish authenticity and non-repudiation.
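The passage notes that systems provide message integrity alongside confidentiality. One standard way to do this is a keyed message authentication code; the following minimal sketch uses Python's standard hmac module, with a shared key and messages invented for illustration.

# Message-integrity sketch: sender and receiver share a secret key, so a
# message modified in transit is detected, because a forger cannot
# compute a valid tag without the key.
import hashlib
import hmac

SHARED_KEY = b"example shared secret"   # illustrative only

def tag(message: bytes) -> bytes:
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

message = b"transfer 100 to account 42"
sent_tag = tag(message)

# The receiver recomputes the tag and compares in constant time.
print(hmac.compare_digest(tag(message), sent_tag))                        # True
print(hmac.compare_digest(tag(b"transfer 900 to account 13"), sent_tag))  # False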
Risk management
Risk management is the process of identifying vulnerabilities and threats to the information resources used by an organization in achieving business objectives, and deciding what countermeasures, if any, to take in reducing risk to an acceptable level, based on the value of the information resource to the organization.

There are two things in this definition that may need some clarification. First, the process of risk management is an ongoing, iterative process. It must be repeated indefinitely. The business environment is constantly changing, and new threats and vulnerabilities emerge every day. Second, the choice of countermeasures (controls) used to manage risks must strike a balance between productivity, cost, effectiveness of the countermeasure, and the value of the informational asset being protected. Risk is the likelihood that something bad will happen that causes harm to an informational asset (or the loss of the asset). A vulnerability is a weakness that could be used to endanger or cause harm to an informational asset. A threat is anything (man-made or act of nature) that has the potential to cause harm.

The likelihood that a threat will use a vulnerability to cause harm creates a risk. When a threat does use a vulnerability to inflict harm, it has an impact. In the context of information security, the impact is a loss of availability, integrity, and confidentiality, and possibly other losses (lost income, loss of life, loss of real property). It should be pointed out that it is not possible to identify all risks, nor is it possible to eliminate all risk. The remaining risk is called residual risk.
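Where an assessment is quantitative, a common textbook formulation (not given in this text, so the formula and all figures below are illustrative) expresses risk as expected yearly loss: the loss per incident multiplied by the expected number of incidents per year.

# Illustrative quantitative risk estimate:
# annualized loss expectancy = single loss expectancy x annual rate of occurrence.
asset_value = 50_000        # value of the informational asset, in dollars
exposure_factor = 0.4       # fraction of the asset's value lost per incident
sle = asset_value * exposure_factor   # single loss expectancy: $20,000
aro = 0.5                   # expected incidents per year (one every two years)
ale = sle * aro             # expected loss per year: $10,000
print(f"ALE = ${ale:,.0f} per year")

# A countermeasure is economical if its yearly cost is below the reduction
# in expected loss it buys -- the balance described above.
countermeasure_cost = 4_000
ale_with_control = 2_000
print("worth deploying:", ale - ale_with_control > countermeasure_cost)   # True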

A risk assessment is carried out by a team of people who have knowledge of specific areas of the business. Membership of the team may vary over time as different parts of the business are assessed. The assessment may use a subjective qualitative analysis based on informed opinion, or, where reliable dollar figures and historical information are available, a quantitative analysis.

Research has shown that the most vulnerable point in most information systems is the human user, operator, or designer. The practice of information security management recommends that the following be examined during a risk assessment:

security policy; organization of information security;

asset management;

human resources security;

physical and environmental security;

communications and operations management;

access control;

information systems acquisition, development and maintenance;

information security incident management;

business continuity management;

regulatory compliance.

In broad terms, the risk management process consists of:

1. Identify assets and estimate their value. Include: people, buildings, hardware, software, data (electronic, print, other), supplies.

2. Conduct a threat assessment. Include: acts of nature, acts of war, accidents, malicious acts originating from inside or outside the organization.

3. Conduct a vulnerability assessment, and for each vulnerability, calculate the probability that it will be exploited. Evaluate policies, procedures, standards, training, physical security, quality control, technical security.

4. Calculate the impact that each threat would have on each asset. Use qualitative analysis or quantitative analysis.

5. Identify, select and implement appropriate controls. Provide a proportional response. Consider productivity, cost effectiveness, and value of the asset.

6. Evaluate the effectiveness of the control measures. Ensure the controls provide the required cost-effective protection without discernible loss of productivity.

For any given risk, Executive Management can choose to accept the risk based upon the relatively low value of the asset, the relatively low frequency of occurrence, and the relatively low impact on the business. Or, leadership may choose to mitigate the risk by selecting and implementing appropriate control measures to reduce the risk. In some cases, the risk can be transferred to another business by buying insurance or outsourcing to another business. The reality of some risks may be disputed. In such cases leadership may choose to deny the risk. This is itself a potential risk.

When Management chooses to mitigate a risk, they will do so by implementing one or more of three different types of controls.

Administrative. Administrative controls (also called procedural controls) consist of approved written policies, procedures, standards and guidelines. Administrative controls form the framework for running the business and managing people. They inform people on how the business is to be run and how day-to-day operations are to be conducted. Laws and regulations created by government bodies are also a type of administrative control because they inform the business. Some industry sectors have policies, procedures, standards and guidelines that must be followed – the Payment Card Industry (PCI) Data Security Standard required by Visa and MasterCard is such an example. Other examples of administrative controls include the corporate security policy, password policy, hiring policies, and disciplinary policies. Administrative controls form the basis for the selection and implementation of logical and physical controls. Logical and physical controls are manifestations of administrative controls. Administrative controls are of paramount importance.

Logical. Logical controls (also called technical controls) use software and data to monitor and control access to information and computing systems. For example: passwords, network and host-based firewalls, network intrusion detection systems, access control lists, and data encryption are logical controls. An important logical control that is frequently overlooked is the principle of least privilege. The principle of least privilege requires that an individual, program or system process is not granted any more access privileges than are necessary to perform the task. A blatant example of the failure to adhere to the principle of least privilege is logging into Windows as user Administrator to read e-mail and surf the Web. Violations of this principle can also occur when an individual collects additional access privileges over time. This happens when employees' job duties change, or they are promoted to a new position, or they transfer to another department. The access privileges required by their new duties are frequently added onto their already existing access privileges, which may no longer be necessary or appropriate.
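As a minimal sketch of such a logical control (the roles, resources and permissions are invented for illustration), an access control list can grant each subject only the privileges its task requires and deny everything else by default:

# Least-privilege sketch: permissions are granted per (subject, resource)
# pair, and any request not explicitly granted is denied.
ACL = {
    ("mail_clerk", "mailbox"): {"read", "write"},
    ("admin",      "server"):  {"read", "write", "configure"},
}

def is_allowed(subject: str, resource: str, action: str) -> bool:
    return action in ACL.get((subject, resource), set())

print(is_allowed("mail_clerk", "mailbox", "read"))      # True
print(is_allowed("mail_clerk", "server", "configure"))  # False: not needed for the task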

Physical. Physical controls monitor and control the environment of the workplace and computing facilities. They also monitor and control access to and from such facilities. For example: doors, locks, heating and air conditioning, smoke and fire alarms, fire suppression systems, cameras, barricades, fencing, security guards, cable locks, etc. Separating the network and the workplace into functional areas is also a physical control.

An important physical control that is frequently overlooked is the separation of duties. Separation of duties ensures that an individual cannot complete a critical task by himself. For example: an employee who submits a request for reimbursement should not also be able to authorize payment or print the check. An applications programmer should not also be the server administrator or the database administrator – these roles and responsibilities must be separated from one another.
Defense in-depth
Information security must protect information throughout its life span, from the initial creation of the information through to its final disposal. The information must be protected while in motion and while at rest. During its lifetime, information may pass through many different information processing systems and through many different parts of information processing systems. There are many different ways the information and information systems can be threatened. To fully protect the information during its lifetime, each component of the information processing system must have its own protection mechanisms. The building up, layering on and overlapping of security measures is called defense in depth. The strength of any system is no greater than its weakest link. Using a defense-in-depth strategy, should one defensive measure fail, there are other defensive measures in place that continue to provide protection.
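To make the layering idea concrete, here is a small illustrative sketch (our own, not from the text) in which a request must pass every defensive layer, so the failure of any single measure does not by itself expose the data:

# Defense-in-depth sketch: each layer is an independent check, and access
# is granted only if every layer agrees.
def network_firewall(req):  return req["source_ip"] != "203.0.113.9"   # blocked host
def host_login(req):        return req["user"] in {"alice", "bob"}
def application_acl(req):   return req["action"] in {"read"}

LAYERS = [network_firewall, host_login, application_acl]

def allowed(request: dict) -> bool:
    return all(layer(request) for layer in LAYERS)

print(allowed({"source_ip": "198.51.100.7", "user": "alice", "action": "read"}))    # True
print(allowed({"source_ip": "198.51.100.7", "user": "mallory", "action": "read"}))  # False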

The three types of controls mentioned above (administrative, logical, and physical) can be used to form the basis upon which to build a defense-in-depth strategy. With this approach, defense-in-depth can be conceptualized as three distinct layers or planes laid one on top of the other. Additional insight into defense-in-depth can be gained by thinking of it as forming the layers of an onion, with data at the core of the onion, people the next outer layer, and network security, host-based security and application security forming the outermost layers. Both perspectives are equally valid, and each provides valuable insight into the implementation of a good defense-in-depth strategy.

Security classification for information. An important aspect of information security and risk management is recognizing the value of information and defining appropriate procedures and protection requirements for the information. Not all information is equal, and so not all information requires the same degree of protection. This requires information to be assigned a security classification.

The first step in information classification is to identify a member of senior management as the owner of the particular information to be classified. Next, develop a classification policy. The policy should describe the different classification labels, define the criteria for information to be assigned a particular label, and list the required security controls for each classification.

Some factors that influence which classification should be assigned to information include how much value the information has to the organization, how old the information is, and whether or not the information has become obsolete. Laws and other regulatory requirements are also important considerations when classifying information.

The type of information security classification labels selected and used will depend on the nature of the organization, with examples being:

In the business sector, labels such as: Public, Sensitive, Private, Confidential.

In the government sector, labels such as: Unclassified, Sensitive But Unclassified, Restricted, Confidential, Secret, Top Secret and their non-English equivalents.

In cross-sectoral formations, the Traffic Light Protocol, which consists of: White, Green, Amber and Red.

All employees in the organization, as well as business partners, must be trained on the classification schema and understand the required security controls and handling procedures for each classification. The classification assigned to a particular information asset should be reviewed periodically to ensure the classification is still appropriate for the information and to ensure the security controls required by the classification are in place.
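As a small illustration of such a policy (the labels come from the business-sector example above; the control lists are invented), a classification scheme can be modeled as a lookup from label to required controls:

# Illustrative classification policy: each label lists the security
# controls required for information carrying that classification.
POLICY = {
    "Public":       {"integrity checks"},
    "Sensitive":    {"integrity checks", "access logging"},
    "Private":      {"integrity checks", "access logging", "encryption at rest"},
    "Confidential": {"integrity checks", "access logging",
                     "encryption at rest", "need-to-know access"},
}

def required_controls(label: str) -> set:
    if label not in POLICY:
        raise ValueError(f"unknown classification label: {label!r}")
    return POLICY[label]

print(sorted(required_controls("Private")))
# ['access logging', 'encryption at rest', 'integrity checks']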

Access control. Access to protected information must be restricted to people who are authorized to access the information. The computer programs, and in many cases the computers that process the information, must also be authorized. This requires that mechanisms be in place to control access to protected information. The sophistication of the access control mechanisms should be in parity with the value of the information being protected – the more sensitive or valuable the information, the stronger the control mechanisms need to be. The foundation on which access control mechanisms are built starts with identification and authentication.

Identification is an assertion of who someone is or what something is. If a person makes the statement "Hello, my name is John Doe", they are making a claim of who they are. However, their claim may or may not be true. Before John Doe can be granted access to protected information, it will be necessary to verify that the person claiming to be John Doe really is John Doe.

Authentication is the act of verifying a claim of identity. When John Doe goes into a bank to make a withdrawal, he tells the bank teller he is John Doe (a claim of identity). The bank teller asks to see a photo ID, so he hands the teller his driver's license. The bank teller checks the license to make sure it has John Doe printed on it and compares the photograph on the license against the person claiming to be John Doe. If the photo and name match the person, then the teller has authenticated that John Doe is who he claimed to be.

There are three different types of information that can be used for authentication: something you know, something you have, or something you are. Examples of something you know include such things as a PIN, a password, or your mother's maiden name. Examples of something you have include a driver's license or a magnetic swipe card. Something you are refers to biometrics. Examples of biometrics include palm prints, finger prints, voice prints and retina (eye) scans. Strong authentication requires providing information from two of the three different types of authentication information. For example, something you know plus something you have. This is called two-factor authentication.
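A minimal sketch of two-factor authentication follows (all stored values and the code-derivation scheme are invented for the example; real systems use vetted implementations): the user must present something they know, a password, and something they have, a one-time code from a token or phone.

# Two-factor authentication sketch: a password (something you know)
# plus a time-based one-time code (something you have).
import hashlib
import hmac
import time

# Server-side records for one user -- illustrative values only.
SALT = b"per-user-salt"
STORED_HASH = hashlib.pbkdf2_hmac("sha256", b"correct horse", SALT, 100_000)
DEVICE_SECRET = b"secret shared with the user's token"

def one_time_code(secret: bytes) -> str:
    # Derive a 6-digit code from the secret and the current 30-second window.
    window = int(time.time() // 30)
    digest = hmac.new(secret, str(window).encode(), hashlib.sha256).digest()
    return str(int.from_bytes(digest[:4], "big") % 1_000_000).zfill(6)

def login(password: str, code: str) -> bool:
    knows = hmac.compare_digest(
        hashlib.pbkdf2_hmac("sha256", password.encode(), SALT, 100_000),
        STORED_HASH)
    has = hmac.compare_digest(code, one_time_code(DEVICE_SECRET))
    return knows and has   # both factors are required

print(login("correct horse", one_time_code(DEVICE_SECRET)))  # True
print(login("correct horse", "000000"))                      # almost certainly False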

On computer systems in use today, the Username is the most common form of identification and the Password is the most common form of authentication. Usernames and passwords have served their purpose but in our modern world they are no longer adequate. Usernames and passwords are slowly being replaced with more sophisticated authentication mechanisms.

After a person, program or computer has successfully been identified and authenticated, it must be determined what informational resources they are permitted to access and what actions they will be allowed to perform (run, view, create, delete, or change). This is called authorization.

Authorization to access information and other computing services begins with administrative policies and procedures. The policies prescribe what information and computing services can be accessed, by whom, and under what conditions. The access control mechanisms are then configured to enforce these policies.

Different computing systems are equipped with different kinds of access control mechanisms - some may even offer a choice of different access control mechanisms. The access control mechanism a system offers will be based upon one of three approaches to access control or it may be derived from a combination of the three approaches.

The non-discretionary approach consolidates all access control under a centralized administration. Access to information and other resources is usually based on the individual's function (role) in the organization or the tasks the individual must perform. The discretionary approach gives the creator or owner of the information resource the ability to control access to those resources. In the mandatory access control approach, access is granted or denied based upon the security classification assigned to the information resource.
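As an illustrative sketch of the non-discretionary (role-based) approach, access decisions can be driven by the individual's role rather than by identity; the roles and permissions below are invented:

# Role-based access control sketch: permissions attach to roles, and users
# obtain permissions only through the roles assigned to them.
ROLE_PERMISSIONS = {
    "teller":  {"view_account", "process_deposit"},
    "auditor": {"view_account", "view_logs"},
}
USER_ROLES = {"alice": {"teller"}, "carol": {"auditor"}}

def authorized(user: str, permission: str) -> bool:
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(authorized("alice", "process_deposit"))  # True
print(authorized("carol", "process_deposit"))  # False: not granted to auditors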
Digital mapping
Digital mapping (also called digital cartography) is the process by which a collection of data is compiled and formatted into a virtual image. The primary function of this technology is to produce maps that give accurate representations of a particular area, detailing major road arteries and other points of interest. The technology also allows the calculation of distances from one place to another. Though digital mapping can be found in a variety of computer applications, such as Google Earth, the main use of these maps is with the Global Positioning System, or GPS satellite network, used in standard automotive navigation systems.

History. The roots of digital mapping lie within traditional paper maps. Paper maps provide basic landscapes similar to digitized road maps, yet are often cumbersome, cover only a designated area, and lack many specific details such as road blocks. In addition, there is no way to “update” a paper map except to obtain a new version. On the other hand, digital maps, in many cases, can be updated through synchronization with updates from company servers. Early digital maps had the same basic functionality as paper maps – that is, they provided a “virtual view” of roads generally outlined by the terrain encompassing the surrounding area. However, as digital maps have grown with the expansion of GPS technology in the past decade, live traffic updates, points of interest and service locations have been added to enhance digital maps to be more “user conscious”. Traditional “virtual views” are now only part of digital mapping. In many cases, users can choose between virtual maps, satellite (aerial views), and hybrid (a combination of virtual map and aerial views) views. With the ability to update and expand digital mapping devices, newly constructed roads and places can be added to appear on maps.

Data Collection. Digital maps rely heavily upon a vast amount of data collected over time. Most of the information that comprises digital maps is the culmination of satellite imagery as well as street level information. Maps must be updated frequently to provide users with the most accurate reflection of a location. While there is a wide spectrum of companies that specialize in digital mapping, the basic premise is that digital maps will accurately portray roads as they actually appear to give "life-like experiences".

Functionality and Use. Computer programs and applications such as Google Earth and Google Maps provide map views from space and street level of much of the world. Used primarily for recreational purposes, Google Earth provides digital mapping in personal applications, such as tracking distances or finding locations. The development of mobile computing (tablet PCs, laptops, etc.) has recently (since about 2000) spurred the use of digital mapping in the sciences and applied sciences. As of 2009, science fields that use digital mapping technology include geology, engineering, architecture, land surveying, mining, forestry, environment, and archaeology. The principal use by which digital mapping has grown in the past decade has been its connection to Global Positioning System (GPS) technology. GPS is the foundation behind digital mapping navigation systems. The coordinates and position, as well as the atomic time, obtained by a terrestrial GPS receiver from GPS satellites orbiting the Earth interact together to provide the digital mapping programming with points of origin in addition to the destination points needed to calculate distance. This information is then analyzed and compiled to create a map that provides the easiest and most efficient way to reach a destination. More technically speaking, the device operates in the following manner (a small distance-calculation sketch follows these steps):

1. GPS receivers collect data from "at least twenty-four GPS satellites" orbiting the Earth, calculating position in three dimensions.

2. The GPS receiver then utilizes position to provide GPS coordinates, or exact points of latitudinal and longitudinal direction, from GPS satellites.

3. The points, or coordinates, output an accurate range within approximately "10-20 meters" of the actual location.

4. The beginning point, entered via GPS coordinates, and the ending point (address or coordinates) input by the user, are then entered into the digital map.

5. The map outputs a real-time visual representation of the route. The map then moves along the path of the driver.

6. If the driver drifts from the designated route, the navigation system will use the current coordinates to recalculate a route to the destination location.
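The distance calculation mentioned above is commonly done with the haversine formula on a spherical Earth model; the sketch below (our illustration, not from the text) computes the great-circle distance between two sample coordinates:

# Great-circle distance between two (latitude, longitude) points,
# using the haversine formula on a spherical Earth model.
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

# Roughly the distance from Cornell (Ithaca, NY) to Harvard (Cambridge, MA).
print(round(haversine_km(42.4534, -76.4735, 42.3770, -71.1167)))  # ~440 km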
Computers
Generally, any device that can perform numerical calculations, even an adding machine, may be called a computer but nowadays this term is used especially for digital computers. Computers that once weighed 30 tons now may weigh as little as 1.8 kilograms. Microchips and microprocessors have considerably reduced the cost of the electronic components required in a computer. Computers come in many sizes and shapes such as special-purpose, laptop, desktop, minicomputers, supercomputers.

Special-purpose computers can perform specific tasks and their operations are limited to the programmes built into their microchips. These computers are the basis for electronic calculators and can be found in thousands of electronic products, including digital watches and automobiles. Basically, these computers do the ordinary arithmetic operations such as addition, subtraction, multiplication and division.

General-purpose computers are much more powerful because they can accept new sets of instructions. The smallest fully functional computers are called laptop computers. Most of the general-purpose computers known as personal or desktop computers can perform almost 5 million operations per second.

Today's personal computers are known to be used for different purposes: for testing new theories or models that cannot be examined with experiments, as valuable educational tools due to various encyclopedias, dictionaries, educational programmes, in book-keeping, accounting and management. Proper application of computing equipment in different industries is likely to result in proper management, effective distribution of materials and resources, more efficient production and trade.

Minicomputers are high-speed computers that have greater data manipulating capabilities than personal computers do and that can be used simultaneously by many users. These machines are primarily used by larger businesses or by large research and university centers. The speed and power of supercomputers, the highest class of computers, are almost beyond comprehension, and their capabilities are continually being improved. The most complex of these machines can perform nearly 32 billion calculations per second and store 1 billion characters in memory at one time, and can do in one hour what a desktop computer would take 40 years to do. They are used commonly by government agencies and large research centers. Linking together networks of several small computer centers and programming them to use a common language has enabled engineers to create the supercomputer. The aim of this technology is to elaborate a machine that could perform a trillion calculations per second.
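As a quick sanity check of that comparison (taking the figures as the text presents them), doing in one hour what a desktop needs 40 years for implies a speed ratio of roughly 350,000 to 1:

# Rough arithmetic behind the comparison in the text.
hours_in_40_years = 40 * 365 * 24
print(hours_in_40_years)      # 350400 -> a ratio of about 350,000 to 1

# At the stated 32 billion calculations per second, one hour amounts to:
print(32_000_000_000 * 3600)  # 115,200,000,000,000 calculations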


Digital computers


There are two fundamentally different types of computers: analog and digital. The former type solves problems by using continuously changing data such as voltage. In current usage, the term "computer" usually refers to high-speed digital computers. These computers are playing an increasing role in all branches of the economy.

Digital computers are based on manipulating discrete binary digits (1s and 0s). They are generally more effective than analog computers for four principal reasons: they are faster; they are not so susceptible to signal interference; they can transfer huge databases more accurately; and their coded binary data are easier to store and retrieve than analog signals.

For all their apparent complexity, digital computers are considered to be simple machines. Digital computers are able to recognize only two states in each of their millions of switches: "on" or "off", or high voltage or low voltage. By assigning binary numbers to these states, 1 for "on" and 0 for "off", and linking many switches together, a computer can represent any type of data, from numbers to letters and musical notes. It is this process of recognizing signals that is known as digitization. The real power of a computer depends on the speed with which it checks switches per second. The more switches a computer checks in each cycle, the more data it can recognize at one time and the faster it can operate, each switch being called a binary digit or bit.
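To see how two-state switches can represent data such as letters, the following illustrative sketch encodes a short text as binary digits and then recovers it by grouping the bits back into bytes:

# Each character becomes eight binary digits (bits): 1 for "on", 0 for "off".
text = "Hi"
bits = "".join(format(byte, "08b") for byte in text.encode("ascii"))
print(bits)               # 0100100001101001

# Grouping the switches back into bytes recovers the original letters.
chars = [chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8)]
print("".join(chars))     # Hi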

A digital computer is a complex system of four functionally different elements: 1) the central processing unit (CPU), 2) input devices, 3) memory-storage devices called disk drives, 4) output devices. These physical parts and all their physical components are called hardware.

The power of computers depends greatly on the characteristics of memory-storage devices. Most digital computers store data both internally, in what is called main memory, and externally, on auxiliary storage units. As a computer processes data and instructions, it temporarily stores information internally on special memory microchips. Auxiliary storage units supplement the main memory when programmes are too large, and they also offer a more reliable method for storing data. There exist different kinds of auxiliary storage devices, removable magnetic disks being the most widely used. They can store up to 100 megabytes of data on one disk, a byte being known as the basic unit of data storage.

Output devices let the user see the results of the computer's data processing. Being the most commonly used output device, the monitor accepts video signals from a computer and shows different kinds of information such as text, formulas and graphics on its screen. With the help of various printers information stored in one of the computer's memory systems can be easily printed on paper in a desired number of copies.

Programmes, also called software, are detailed sequences of instructions that direct the computer hardware to perform useful operations. Due to a computer's operating system hardware and software systems can work simultaneously. An operating system consists of a number of programmes coordinating operations, translating the data from different input and output devices, regulating data storage in memory, transferring tasks to different processors, and providing functions that help programmers to write software. In large corporations software is often written by groups of experienced programmers, each person focusing on a specific aspect of the total project. For this reason, scientific and industrial software sometimes costs much more than do the computers on which the programmes run.
The first hackers
(1) The first "hackers" were students at the Massachusetts Institute of Technology (MIT) who belonged to the TMRC (Tech Model Railroad Club). Some of the members really built model trains. But many were more interested in the wires and circuits underneath the track platform. Spending hours at TMRC creating better circuitry was called "a mere hack." Those members who were interested in creating innovative, stylistic, and technically clever circuits called themselves (with pride) hackers.

(2) During the spring of 1959, a new course was offered at MIT, a freshman programming class. Soon the hackers of the railroad club were spending days, hours, and nights hacking away at their computer, an IBM 704. Instead of creating a better circuit, their hack became creating faster, more efficient programs - with the least number of lines of code. Eventually they formed a group and created the first set of hacker's rules, called the Hacker's Ethic.

(3) Steven Levy, in his book Hackers, presented the rules:

Rule 1: Access to computers - and anything which might teach you something about the way the world works - should be unlimited and total.

Rule 2: All information should be free.

Rule 3: Mistrust authority - promote decentralization.

Rule 4: Hackers should be judged by their hacking, not bogus criteria such as degrees, race, or position.

Rule 5: You can create art and beauty on a computer.

Rule 6: Computers can change your life for the better.

(4) These rules made programming at MIT's Artificial Intelligence Laboratory a challenging, all-encompassing endeavor. Just for the exhilaration of programming, students in the AI Lab would write a new program to perform even the smallest tasks. The program would be made available to others who would try to perform the same task with fewer instructions. The act of making the computer work more elegantly was, to a bona fide hacker, awe-inspiring.

(5) Hackers were given free rein on the computer by two AI Lab professors, "Uncle" John McCarthy and Marvin Minsky, who realized that hacking created new insights. Over the years, the AI Lab created many innovations: LIFE, a game about survival; LISP, a new kind of programming language; the first computer chess game; The CAVE, the first computer adventure; and SPACEWAR, the first video game.


