
[Resource Sharing] [Computer Science] Unsure which direction to take? Each post in this thread covers one CS field (source: Wikipedia)

Posted 2007-9-17 16:45:40
Reference: http://www.taisha.org/bbs/thread-495121-1-1.html
Computer science
Computer science, or computing science, is the study of the theoretical foundations of information and computation and their implementation and application in computer systems.[1][2][3] Computer science has many sub-fields; some emphasize the computation of specific results (such as computer graphics), while others relate to properties of computational problems (such as computational complexity theory). Still others focus on the challenges in implementing computations. For example, programming language theory studies approaches to describing computations, while computer programming applies specific programming languages to solve specific computational problems. A further subfield, human-computer interaction, focuses on the challenges in making computers and computations useful, usable and universally accessible to people.

Contents
1 History
2 Major achievements
3 Relationship with other fields
4 Fields of computer science
  4.1 Mathematical foundations
  4.2 Theory of computation
  4.3 Algorithms and data structures
  4.4 Programming languages and compilers
  4.5 Concurrent, parallel, and distributed systems
  4.6 Software engineering
  4.7 System architecture
  4.8 Communications
  4.9 Databases
  4.10 Artificial intelligence
  4.11 Visual rendering (or computer graphics)
  4.12 Human-computer interaction
  4.13 Scientific computing
  4.14 Didactics of computer science / didactics of informatics
5 Computer science education
6 See also
7 References
8 External links


History
Main article: History of computer science
The history of computer science predates the invention of the modern digital computer by many centuries. Machines for calculating fixed numerical tasks, such as the abacus, have existed since antiquity. Wilhelm Schickard built the first mechanical calculator in 1623.[4] Charles Babbage designed a difference engine in Victorian times (between 1837 and 1901),[5] helped by Ada Lovelace.[6] Around 1900, punch-card machines were sold by the company that later became IBM.[7] However, all of these machines were constrained to perform a single task, or at best some subset of all possible tasks.
During the 1940s, as newer and more powerful computing machines were developed, the term computer came to refer to the machines rather than their human predecessors. As it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. Computer science began to be established as a distinct academic discipline in the 1960s, with the creation of the first computer science departments and degree programs.[8] Since practical computers became available, many applications of computing have become distinct areas of study in their own right.

Major achievements
The German military used the Enigma machine during World War II for communication they thought to be secret. The large-scale decryption of Enigma traffic at Bletchley Park was an important factor that contributed to Allied victory in WWII.[9]
Despite its relatively short history as a formal academic discipline, computer science has made a number of fundamental contributions to science and society.
Relationship with other fields
Despite its name, much of computer science does not involve the study of computers themselves. Because of this, several alternative names have been proposed. Danish scientist Peter Naur suggested the term datalogy, to reflect the fact that the scientific discipline revolves around data and data treatment, while not necessarily involving computers. The first scientific institution to apply the term was DIKU, the Department of Datalogy at the University of Copenhagen, founded in 1969, with Peter Naur as the first professor of datalogy. The term is used mainly in the Scandinavian countries. Also, in the early days of computing, a number of terms for the practitioners of the field were suggested in the Communications of the ACM: turingineer, turologist, flow-charts-man, applied meta-mathematician, and applied epistemologist.[13] Three months later in the same journal, comptologist was suggested, followed the next year by hypologist.[14] More recently the term computics has been suggested.[15]
In fact, the renowned computer scientist Edsger Dijkstra is often quoted as saying, "Computer science is no more about computers than astronomy is about telescopes." The design and deployment of computers and computer systems is generally considered the province of disciplines other than computer science. For example, the study of computer hardware is usually considered part of computer engineering, while the study of commercial computer systems and their deployment is often called information technology or information systems. Computer science is sometimes criticized as being insufficiently scientific, a view espoused in the statement "Science is to computer science as hydrodynamics is to plumbing", credited to Stan Kelly-Bootle[16] and others. However, there has been much cross-fertilization of ideas between the various computer-related disciplines. Computer science research has also often crossed into other disciplines, such as artificial intelligence, cognitive science, physics (see quantum computing), and linguistics.
Computer science is considered by some to have a much closer relationship with mathematics than many scientific disciplines.[8] Early computer science was strongly influenced by the work of mathematicians such as Kurt Gödel and Alan Turing, and there continues to be a useful interchange of ideas between the two fields in areas such as mathematical logic, category theory, domain theory, and algebra.
The relationship between computer science and software engineering is a contentious issue, which is further muddied by disputes over what the term "software engineering" means, and how computer science is defined. David Parnas, taking a cue from the relationship between other engineering and science disciplines, has claimed that the principal focus of computer science is studying the properties of computation in general, while the principal focus of software engineering is the design of specific computations to achieve practical goals, making the two separate but complementary disciplines.[17]

Fields of computer science
Computer science searches for concepts and formal proofs to explain and describe computational systems of interest. As with all sciences, these theories can then be utilised to synthesize practical engineering applications, which in turn may suggest new systems to be studied and analysed. While the ACM Computing Classification System can be used to split computer science into different topics or fields, a more descriptive breakdown follows:

Mathematical foundations
Mathematical logic: Boolean logic and other ways of modeling logical queries; the uses and limitations of formal proof methods.
Number theory: The theory of proofs and heuristics for finding proofs in the simple domain of integers. Used in cryptography as well as a test domain in artificial intelligence.
Graph theory: Foundations for data structures and searching algorithms (see the search sketch after this list).
Type theory: Formal analysis of the types of data, and the use of these types to understand properties of programs, especially program safety.
Category theory: A means of capturing all of math and computation in a single synthesis.
Computational geometry: The study of algorithms to solve problems stated in terms of geometry.
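To make the graph theory entry concrete, here is a minimal sketch of breadth-first search over an adjacency-list graph; the graph, function name, and data are invented for illustration.
[code]
# Breadth-first search: a classic graph-based searching algorithm.
from collections import deque

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}

def reachable(graph, start):
    """Return the set of vertices reachable from `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbour in graph[node]:
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return seen

print(reachable(graph, 'A'))  # {'A', 'B', 'C', 'D'}
[/code]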
Theory of computation
Main article: Theory of computation
Automata theory: Different logical structures for solving problems.
Computability theory: What is calculable with the current models of computers. Proofs developed by Alan Turing and others provide insight into the possibilities of what can be computed and what cannot.
Computational complexity theory: Fundamental bounds (especially time and storage space) on classes of computations.
Quantum computing theory: Representation and manipulation of data using the quantum properties of particles and quantum mechanics.
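As a small illustration of automata theory, the sketch below simulates a deterministic finite automaton (DFA) that accepts binary strings containing an even number of 1s; the state names and transition table are invented for the example.
[code]
def run_dfa(transitions, start, accepting, string):
    """Simulate a DFA; return True if the string is accepted."""
    state = start
    for symbol in string:
        state = transitions[(state, symbol)]
    return state in accepting

# States track whether an even or odd number of 1s has been seen so far.
transitions = {
    ('even', '0'): 'even', ('even', '1'): 'odd',
    ('odd', '0'): 'odd',   ('odd', '1'): 'even',
}

print(run_dfa(transitions, 'even', {'even'}, '1101'))  # False (three 1s)
print(run_dfa(transitions, 'even', {'even'}, '1100'))  # True (two 1s)
[/code]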
Algorithms and data structures
Analysis of algorithms: Time and space complexity of algorithms.
Algorithms: Formal logical processes used for computation, and the efficiency of these processes.
Data structures: The organization of, and rules for, the manipulation of data.
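The following sketch illustrates analysis of algorithms: linear search inspects up to n elements (O(n)), while binary search on sorted data halves the search range at each step (O(log n)). The step counters, names, and data are invented for the example.
[code]
def linear_search(items, target):
    steps = 0
    for i, x in enumerate(items):
        steps += 1
        if x == target:
            return i, steps
    return -1, steps

def binary_search(items, target):
    lo, hi, steps = 0, len(items) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid, steps
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, steps

data = list(range(1000000))
print(linear_search(data, 999999)[1])  # 1000000 comparisons
print(binary_search(data, 999999)[1])  # about 20 comparisons
[/code]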
Programming languages and compilers
Compilers: Ways of translating computer programs, usually from higher-level languages to lower-level ones.
Interpreters: Programs that take another program as input and execute it directly.
Programming languages: Formal language paradigms for expressing algorithms, and the properties of these languages (e.g. what problems they are suited to solve).
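A minimal sketch of an interpreter, per the entry above: it takes a tiny program (a reverse-Polish arithmetic expression) as input and executes it directly rather than translating it to a lower-level language first. The notation and function name are chosen for illustration.
[code]
def interpret_rpn(source):
    """Evaluate a space-separated reverse-Polish expression, e.g. '3 4 + 2 *'."""
    stack = []
    for token in source.split():
        if token in '+-*/':
            b, a = stack.pop(), stack.pop()
            stack.append({'+': a + b, '-': a - b,
                          '*': a * b, '/': a / b}[token])
        else:
            stack.append(float(token))
    return stack.pop()

print(interpret_rpn('3 4 + 2 *'))  # 14.0, i.e. (3 + 4) * 2
[/code]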
Concurrent, parallel, and distributed systems
Concurrency: The theory and practice of simultaneous computation; data safety in any multitasking or multithreaded environment.
Distributed computing: Computing with multiple devices over a network to accomplish a common objective or task, reducing the time a single processor would need.
Parallel computing: Computing using multiple concurrent threads of execution.
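The data-safety point in the concurrency entry can be shown in a few lines: without the lock below, the read-modify-write of the shared counter can interleave across threads and lose updates. This is a minimal sketch with invented names; note that CPython's global interpreter lock can mask, but does not prevent, such races.
[code]
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:  # remove the lock to risk lost updates
            counter += 1

threads = [threading.Thread(target=increment, args=(100000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 with the lock; possibly less without it
[/code]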
Software engineering
Algorithm design: Using ideas from algorithm theory to creatively design solutions to real tasks.
Computer programming: The practice of using a programming language to implement algorithms.
Formal methods: Mathematical approaches for describing and reasoning about software designs.
Reverse engineering: The application of the scientific method to the understanding of arbitrary existing software.
Software development: The principles and practice of designing, developing, and testing programs, as well as proper engineering practices.
System architecture
Computer architecture: The design, organization, optimization and verification of a computer system, mostly concerning CPUs and the memory subsystem (and the bus connecting them).
Computer organization: The implementation of computer architectures, in terms of descriptions of their specific electrical circuitry.
Operating systems: Systems for managing computer programs and providing the basis of a usable system.
Communications
Computer audio: Algorithms and data structures for the creation, manipulation, storage, and transmission of digital audio recordings; also important in voice recognition applications.
Networking: Algorithms and protocols for reliably communicating data across different shared or dedicated media, often including error correction.
Cryptography: Applies results from complexity, probability and number theory to invent and break codes.
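To make the error-correction point in the networking entry concrete, here is a minimal sketch of single-bit error detection with an even-parity bit (it detects any one-bit error, but cannot correct it or catch two flipped bits). Names and data are invented for the example.
[code]
def add_parity(bits):
    """Append an even-parity bit to a list of 0/1 payload bits."""
    return bits + [sum(bits) % 2]

def check_parity(bits):
    """Return True if the received frame (payload + parity) has even parity."""
    return sum(bits) % 2 == 0

frame = add_parity([1, 0, 1, 1])
print(check_parity(frame))  # True: nothing corrupted
frame[2] ^= 1               # flip one bit "in transit"
print(check_parity(frame))  # False: the error is detected
[/code]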
Databases
Data mining: The extraction of relevant information and patterns from large data sources.
Relational databases: The study of algorithms for searching and processing information in documents and databases; closely related to information retrieval.
Artificial intelligence
Artificial intelligence: The implementation and study of systems that exhibit an autonomous intelligence or behaviour of their own.
Artificial life: The study of digital organisms to learn about biological systems and evolution.
Automated reasoning: Solving engines, such as those used in Prolog, which produce steps to a result given a query on a fact and rule database.
Computer vision: Algorithms for identifying three-dimensional objects from one or more two-dimensional pictures.
Machine learning: Automated creation of a set of rules and axioms based on input.
Natural language processing/computational linguistics: Automated understanding and generation of human language.
Robotics: Algorithms for controlling the behavior of robots.
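As a minimal sketch of the machine learning entry, a 1-nearest-neighbour classifier labels new samples directly from labelled examples instead of hand-written rules. The toy data (height, weight, label) and names are invented for illustration.
[code]
def classify(sample, examples):
    """Label a sample by its closest labelled example (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(examples, key=lambda ex: dist(sample, ex[0]))
    return nearest[1]

training = [((150, 45), 'child'), ((180, 80), 'adult'), ((160, 55), 'adult')]
print(classify((152, 48), training))  # 'child'
[/code]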
Visual rendering (or computer graphics)
Computer graphics: Algorithms both for generating visual images synthetically, and for integrating or altering visual and spatial information sampled from the real world.
Image processing: Determining information from an image through computation.
Human-Computer Interaction
Human computer interaction The study of making computers and computations useful, usable and universally accessible to people, including the study and design of computer interfaces through which people use computers.
Scientific computing
Bioinformatics: The use of computer science to maintain, analyse, and store biological data, and to assist in solving biological problems such as protein folding, function prediction and phylogeny.
Cognitive science: Computational modelling of real minds.
Computational chemistry: Computational modelling of theoretical chemistry in order to determine chemical structures and properties.
Computational neuroscience: Computational modelling of real brains.
Computational physics: Numerical simulations of large non-analytic systems.
Numerical algorithms: Algorithms for the numerical solution of mathematical problems such as root-finding, integration, the solution of ordinary differential equations and the approximation/evaluation of special functions (see the root-finding sketch after this list).
Symbolic mathematics: Manipulation and solution of expressions in symbolic form, also known as computer algebra.
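As a minimal sketch of the numerical root-finding mentioned above, the bisection method repeatedly halves an interval known to bracket a root. The function name and tolerance are chosen for illustration.
[code]
def bisect(f, lo, hi, tol=1e-12):
    """Find x with f(x) close to 0, assuming f(lo) and f(hi) have opposite signs."""
    assert f(lo) * f(hi) < 0, "interval must bracket a root"
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Approximate sqrt(2) as the positive root of x^2 - 2 = 0.
print(bisect(lambda x: x * x - 2, 0, 2))  # 1.4142135623...
[/code]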
Didactics of computer science / Didactics of Informatics
The subfield didactics of computer science focuses on cognitive approaches to developing competencies in computer science, and on specific strategies for the analysis, design, implementation and evaluation of excellent computer science lessons.
Since the 1960s, experts in higher education, the pioneers of the didactics of computer science, have been developing guidelines and curriculum recommendations.
About ten years later, computer science became a subject of secondary education, and the didactics of computer science also became a study subject in teacher education.
At present, the educational aims of computer science as a school subject are shifting from the programming of small imperative solutions to the modelling, construction and deconstruction of complex, object-oriented systems. But there is a big gap between the didactic needs and the published research results in this field, e.g.:
The Educational Value of Informatics
Fundamental Ideas of Informatics
Didactic Systems of Informatics
Understanding of Informatics Systems
Educational Standards of Informatics
International Curricula

Computer science education
Some universities teach computer science as a theoretical study of computation and algorithmic reasoning. These programs often feature the theory of computation, analysis of algorithms, formal methods, concurrency theory, databases, computer graphics and systems analysis, among others. They typically also teach computer programming, but treat it as a vessel for the support of other fields of computer science rather than a central focus of high-level study.
Other colleges and universities, as well as secondary schools and vocational programs that teach computer science, emphasize the practice of advanced computer programming rather than the theory of algorithms and computation in their computer science curricula. Such curricula tend to focus on those skills that are important to workers entering the software industry. The practical aspects of computer programming are often referred to as software engineering. However, there is a lot of disagreement over what the term \"software engineering\" actually means, and whether it is the same thing as programming.
See Peter J. Denning, Great principles in computing curricula, Technical Symposium on Computer Science Education, 2004.
Posted by the thread starter, 2007-9-17 16:52:14
Software engineering is the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software.[1] The term software engineering was popularized during the 1968 NATO Software Engineering Conference (held in Garmisch, Germany) by its chairman F.L. Bauer, and has been in widespread use since. The discipline of software engineering encompasses knowledge, tools, and methods for defining software requirements, and performing software design, software construction, software testing, and software maintenance tasks.[2] Software engineering also draws on knowledge from fields such as computer engineering, computer science, management, mathematics, project management, quality management, software ergonomics, and systems engineering.[2]

As of 2004, the U.S. Bureau of Labor Statistics counted 760,840 software engineers holding jobs in the U.S.; for comparison, in the U.S. there are some 1.4 million practitioners employed in all other engineering disciplines combined.[3] The term software engineer is used very liberally in the corporate world, and very few practicing software engineers actually hold engineering degrees from accredited universities. There are estimated to be about 1.5 million practitioners in the E.U., Asia, and elsewhere[citation needed]. SE pioneers include Barry Boehm, Fred Brooks, C. A. R. Hoare, and David Parnas.

Contents
1 Nature
1.1 Definition
1.1.1 Other meanings
2 Purpose
3 Technologies and practices
4 The software engineering profession
4.1 Debate over the term 'engineering'
4.2 Education
4.3 Employment
4.4 Certification
4.5 Impact of globalization
4.6 Comparing related fields
5 History
5.1 60 year time line
5.2 Current trends in software engineering
5.3 Software engineering today
6 Conferences, organizations and publications
6.1 Conferences
6.2 Organizations
6.3 Publications
7 See also
8 References
9 Further reading
10 External links



Nature
David Parnas has said that software engineering is, in fact, a form of engineering.[4][5] Steve McConnell has said that it is not, but that it should be.[6] Donald Knuth has said that programming is an art and a science.[7]


The U.S. Bureau of Labor Statistics classifies computer software engineers as a subcategory of "computer specialists", along with occupations such as computer scientist, programmer, and network administrator.[8] The BLS classifies all other engineering disciplines, including computer hardware engineers, as "engineers".[9]

The U.K. has seen the alignment of the Information Technology Professional and the Engineering Professional.[10] Software engineering in Canada has seen some contests in the courts over the use of the title "Software Engineer".[11]


Definition
Typical formal definitions of software engineering include:

"the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software".[1]
"an engineering discipline that is concerned with all aspects of software production"[12]
"the establishment and use of sound engineering principles in order to economically obtain software that is reliable and works efficiently on real machines"[13]

Other meanings
As Dijkstra pointed out, the terms software engineering and software engineer have, at times, also been misused in a much wider sense, particularly in America.[14] The term has been used less formally:

as the informal contemporary term for the broad range of activities that was formerly called programming and systems analysis;[15]
as the broad term for all aspects of the practice of computer programming, as opposed to the theory of computer programming, which is called computer science;[16]
as the term embodying the advocacy of a specific approach to computer programming, one that urges that it be treated as an engineering discipline rather than an art or a craft, and advocates the codification of recommended practices in the form of software engineering methodologies.[17]

Purpose
Software is often found in products and situations where very high reliability is expected, even under demanding conditions, such as monitoring and controlling nuclear power plants, or keeping a modern airliner aloft.[18] Such applications contain millions of lines of code, making them comparable in complexity to the most complex modern machines. For example, a modern airliner has several million physical parts[19] (and the space shuttle about ten million parts[20]), while the software for such an airliner can run to 4 million lines of code.[21]


Technologies and practices
Main article: Software development process
Software engineers advocate many different technologies and practices, with much disagreement; this debate has gone on for over 60 years. Software engineers use a wide variety of technologies: compilers, code repositories, text editors. They also use a wide variety of practices to carry out and coordinate their efforts: pair programming, code reviews, and daily stand-up meetings.

In spite of the enormous economic growth and productivity gains enabled by software, persistent complaints about the quality of software remain.[citation needed]

See also: Debates within software engineering

The software engineering profession

Debate over the term 'engineering'
Some people believe that software development is a more appropriate term than software engineering for the process of creating software. Pete McBreen, author of "Software Craftsmanship: The New Imperative" (ISBN 0-201-73386-2), argues that the term software engineering implies levels of rigor and proven processes that are not appropriate for all types of software development. He argues strongly for 'craftsmanship' as a more appropriate metaphor, because that term brings into sharper focus the skills of the developer as the key to success, instead of the "manufacturing" process. Using a more traditional comparison, just as not everyone who works in construction is a civil engineer, not everyone who can write code is a software engineer.

Some people dispute the notion that the field is mature enough to warrant the title "engineering"[citation needed]. In each of the last few decades, at least one radical new approach has entered the mainstream of software development (e.g. Structured Programming, Object Orientation, ... ), implying that the field is still changing too rapidly to be considered an engineering discipline. Other people would argue that the supposedly radical new approaches are actually evolutionary rather than revolutionary, the mere introduction of new tools rather than fundamental changes[citation needed].


Education

People from many different educational backgrounds make important contributions to SE. Today, software engineers earn software engineering, computer engineering or computer science degrees. However, there are a great number of people in the industry without engineering degrees earned from accredited universities, so the use of the term "software engineer" is somewhat ambiguous.

Software degrees in the U.S. and Canada
About half of all practitioners today have computer science degrees. A small, but growing, number of practitioners have software engineering degrees. In 1996, the Rochester Institute of Technology established the first BSSE degree program in the United States, but did not obtain ABET accreditation until 2003, at the same time as Clarkson University, Milwaukee School of Engineering and Mississippi State University.[22] Since then, software engineering undergraduate degrees have been established at many universities. A standard international curriculum for undergraduate software engineering degrees was recently defined by the CCSE. As of 2004, about 50 universities in the U.S. offer software engineering degrees, which teach both computer science and engineering principles and practices. The first graduate software engineering degree (MSSE) was established at Seattle University in 1979; since then, graduate software engineering degrees have been made available from many more universities. Likewise in Canada, the Canadian Engineering Accreditation Board (CEAB) of the Canadian Council of Professional Engineers has recognized software engineering programs in engineering faculties such as McMaster University, the University of Waterloo, the University of Ottawa and the University of Western Ontario since 2001.[23][24]
In 1998, the prestigious US Naval Postgraduate School (NPS) established the first doctoral program in Software Engineering in the world.[citation needed] As of the beginning of 2006, thirteen students had graduated from the program and assumed senior-level leadership roles in the Department of Defense research and development community.[citation needed]
Domain degrees
Some practitioners have degrees in application domains, bringing important domain knowledge and experience to projects. In MIS, some practitioners have business degrees. In embedded systems, some practitioners have electrical or computer engineering degrees, because embedded software often requires a detailed understanding of hardware. In medical software, some practitioners have medical informatics, general medical, or biology degrees.
Other degrees
Some practitioners have mathematics, science, engineering, or other technical degrees. Some have philosophy (logic in particular) or other non-technical degrees. And, some have no degrees. For instance, Barry Boehm earned degrees in mathematics.

Employment
See also: software engineering demographics
Most software engineers work as employees or contractors, with businesses, government agencies (civilian or military), and non-profit organizations; some work for themselves as freelancers. Some organizations have specialists to perform each of the tasks in the software development process; others require software engineers to do many or all of them. In large projects, people may specialize in only one role; in small projects, people may fill several or all roles at the same time. Specializations include, in industry: analysts, architects, developers, testers, technical support, and managers; and in academia: educators and researchers.

There is considerable debate over the future employment prospects for Software Engineers and other IT Professionals. For example, an online futures market called the Future of IT Jobs in America attempts to answer whether there will be more IT jobs, including software engineers, in 2012 than there were in 2002.


Certification
Certification of software engineers is a contentious issue.[citation needed] Some see it as a tool to improve professional practice.[citation needed]

Most successful certification programs in the software industry are oriented toward specific technologies, and are managed by the vendors of these technologies.[citation needed] These certification programs are tailored to the institutions that would employ people who use these technologies.

The ACM had a professional certification program in the early 1980s, which was discontinued due to lack of interest.[citation needed] As of 2006, the IEEE had certified over 575 software professionals.[25] In Canada the Canadian Information Processing Society has developed a legally recognized professional certification called Information Systems Professional (ISP).[26]


Impact of globalization
Many students in the developed world have avoided degrees related to software engineering because of the fear of offshore outsourcing (importing software products or services from other countries) and of being displaced by foreign visa workers [3]. Although government statistics do not currently show a threat to software engineering itself, a related career, computer programming, does appear to have been affected [4][5]. Often one is expected to start out as a computer programmer before being promoted to software engineer. Thus, the career path to software engineering may be rough, especially during recessions.

Some career counselors suggest a student also focus on "people skills" and business skills rather than purely technical skills, because such "soft skills" are allegedly more difficult to offshore [6]. The quasi-management aspects of software engineering appear to be what has kept it from being affected by globalization. [7]


Comparing related fields
Main article: Comparing software engineering and related fields
Many fields are closely related to software engineering; here are some key similarities and distinctions. Comparing SE with other fields helps explain what SE is and helps define what SE might or should become. There is considerable debate over which fields SE most resembles (or should most resemble). These complex and inexact comparisons explain why some see software engineering as its own field.


History
Main article: History of software engineering
Software engineering has a long evolving history. Both the tools that are used and the applications that are written have evolved over time. It seems likely that software engineering will continue evolving for many decades to come.


60-year timeline
1940s[citation needed]: First computer users wrote machine code by hand.
1950s: Early tools, such as macro assemblers and interpreters, were created and widely used to improve productivity and quality[citation needed]. First-generation optimizing compilers[citation needed].
1960s: Second-generation tools like optimizing compilers and inspections were being used to improve productivity and quality[citation needed]. The concept of software engineering was widely discussed[citation needed]. First really big (1,000-programmer) projects[citation needed]. Commercial mainframes and custom software for big business. The influential 1968 NATO Conference on Software Engineering was held.
1970s: Collaborative software tools, such as Unix, code repositories, make, and so on. Minicomputers and the rise of small business software.
1980s: Personal computers and personal workstations became common. Commensurate rise of consumer software.
1990s: Object-oriented programming and agile processes like Extreme programming gained mainstream acceptance[citation needed]. Computer memory capacity sky-rocketed and prices dropped drastically[citation needed]. These new technologies allowed software to grow more complex[citation needed]. The WWW and hand-held computers made software even more widely available.
2000s: Managed code and interpreted platforms such as Java, .NET, Ruby, Python and PHP made writing software easier than ever before[citation needed]. Offshore outsourcing changed the nature and focus of software engineering careers.

Current trends in software engineering
Software engineering is a young discipline, and is still developing. The directions in which software engineering is developing include:

Aspects
Aspects help software engineers deal with quality attributes (the "-ilities") by providing tools to add or remove boilerplate code from many areas in the source code. Aspects describe how all objects or functions should behave in particular circumstances. For example, aspects can add debugging, logging, or locking control into all objects of particular types. Researchers are currently working to understand how to use aspects to design general-purpose code. Related concepts include generative programming and templates.
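The flavour of the aspect idea can be sketched with a Python decorator: logging is written once and woven into any function, instead of being pasted into each body. This is only an analogy under invented names; full aspect-oriented tools such as AspectJ are far richer.
[code]
import functools

def logged(func):
    """A cross-cutting 'logging aspect' applied to a single function."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"calling {func.__name__} with {args} {kwargs}")
        result = func(*args, **kwargs)
        print(f"{func.__name__} returned {result}")
        return result
    return wrapper

@logged
def add(a, b):
    return a + b

add(2, 3)  # the call is traced without touching add's body
[/code]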
Agile
Agile software development guides software development projects that evolve rapidly with changing expectations and competitive markets. Proponents of this method believe that heavy, document-driven processes (like TickIT, CMM and ISO 9000) are fading in importance[citation needed]. Some people believe that companies and agencies export many of the jobs that can be guided by heavy-weight processes[citation needed]. Related concepts include Extreme Programming and Lean software development.
Experimental
Experimental software engineering is a branch of software engineering interested in devising experiments on software, in collecting data from the experiments, and in devising laws and theories from this data. Proponents of this method advocate that the nature of software is such that we can advance our knowledge of software only through experiments[citation needed].
Model-driven
Model Driven Software Development uses (both textual and graphical) models as primary development artifacts. By means of model transformation and code generation, partial or complete applications are generated.
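A toy sketch of the model-driven idea: a declarative model (here just a dict standing in for a UML or textual model) is transformed into source code by a generator. All names and the model format are invented for illustration.
[code]
model = {"entity": "Book", "fields": ["title", "author", "year"]}

def generate_class(m):
    """Generate Python source for a simple data class from the model."""
    args = ", ".join(m["fields"])
    body = "\n".join(f"        self.{f} = {f}" for f in m["fields"])
    return f"class {m['entity']}:\n    def __init__(self, {args}):\n{body}\n"

source = generate_class(model)
print(source)  # inspect the generated code
exec(source)   # bring the generated class into the running program
print(Book('Dune', 'Herbert', 1965).title)  # Dune
[/code]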
Software Product Lines
Software Product Lines is a systematic way to produce families of software systems, instead of creating a succession of completely individual products. This method emphasizes extensive, systematic, formal code reuse, to try to industrialize the software development process.
The Future of Software Engineering conference (FOSE), held at ICSE 2000, documented the state of the art of SE in 2000 and listed many problems to be solved over the next decade. The FOSE tracks at the ICSE 2000 and the ICSE 2007 conferences also help identify the state of the art in software engineering. The Feyerabend project attempts to discover the future of software engineering by seeking and publishing innovative ideas.


Software engineering today
In 2006, Money Magazine and Salary.com rated software engineering as the best job in America in terms of growth, pay, stress levels, flexibility in hours and working environment, creativity, and how easy it is to enter and advance in the field.[27]

See also software engineering economics.


Conferences, organizations and publications

Conferences

Several academic conferences devoted to software engineering are held every year. There are also many other academic conferences every year devoted to special topics within SE, such as programming languages, requirements, testing, and so on.

ICSE
The biggest and oldest conference devoted to software engineering is the International Conference on Software Engineering. This conference meets every year to discuss improvements in research, education, and practice.
COMPSAC
The Annual International Computer Software and Applications Conference was first held in Chicago in 1977 and is designated as the IEEE Computer Society signature conference on software technology and applications.
ESEC
The European Software Engineering Conference.
FSE
The Foundations of Software Engineering conference is held every year, alternating between Europe and North America. It emphasizes theoretical and foundational issues.
CUSEC
Conferences dedicated to informing undergraduate students, such as the annual Canadian University Software Engineering Conference, also serve the next generation of practitioners. It is organized entirely by undergraduate students, and a different Canadian university interested in software engineering hosts the conference each year. Past guests include Kent Beck, Joel Spolsky, Philippe Kruchten, Hal Helms, Craig Larman, David Parnas as well as university professors and students.
SEPG
The annual Software Engineering Process Group conference, sponsored by the Carnegie Mellon Software Engineering Institute (SEI), is a conference and exhibit showcase for systems and software engineering professionals. The four-day event emphasizes systematic improvement of people, processes, and technology.
INFORMATICS-INFORMATIQUE
The annual Canadian information technology, data processing and software engineering symposium, sponsored by the Canadian Information Processing Society. First held in 1958.
ICALEPS
International Conference on Accelerator and Large Experimental Physics Control Systems Conference [8]. Biennial conference covering software engineering for large scale scientific control systems. First held in 1987.
APSEC
Asia Pacific Software Engineering Conference [9].
UYMS
National Software Engineering Symposium (in Turkish: Ulusal Yazilim Muhendisligi Sempozyumu) [10] (not available in English). Biennial symposium first held in İzmir, Turkey in 2003.

Organizations
Association for Computing Machinery (ACM)
Australian Computer Society (ACS)
British Computer Society (BCS)
Canadian Information Processing Society (CIPS) - Information Systems Professional certification.
IEEE Computer Society
Lero, the Irish Software Engineering Research Centre
Russian Software Developers Association (RUSSOFT)
Software Association of Oregon (SAO)
Software Engineering Institute (SEI)
Software Industry Professionals
The Safety and Reliability Society
Posted by the thread starter, 2007-9-17 16:58:06
Computer engineering (also called electronic and computer engineering) is a discipline that combines elements of both electrical engineering and computer science.[1] Computer engineers are electrical engineers who have additional training in the areas of software design and hardware-software integration.[citation needed] In turn, they focus less on power electronics and physics. Computer engineers are involved in many aspects of computing, from the design of individual microprocessors, personal computers, and supercomputers, to circuit design; embedded computer systems of this kind monitor, for example, the many subsystems in motor vehicles.[2]
Usual tasks involving computer engineers include writing software and firmware for embedded microcontrollers, designing VLSI chips, designing analog sensors, designing mixed-signal circuit boards, and designing operating systems.[citation needed] Computer engineers are also suited for robotics research,[citation needed] which relies heavily on using digital systems to control and monitor electrical systems like motors, communications, and sensors.
The terms hardware engineering and hardware engineer are also used, referring to the opposite of software engineering.

Computer engineering as an academic discipline
The first accredited computer engineering degree program in the United States was established at Case Western Reserve University in 1971; as of October 2004 there were 170 ABET-accredited computer engineering programs in the US.[3]
The Engineering Pathway's Computer Engineering Education community site provides a description of the ABET accreditation criteria in computer engineering, a digital library of educational resources in computer engineering that are tagged for relevancy to the ABET general criteria, as well as educational resources in other related interdisciplinary subjects. The Engineering Pathway is the engineering education wing of the National Science Digital Library (NSDL).
Due to increasing job requirements for engineers who can design and manage all forms of computer systems used in industry, some tertiary institutions around the world offer a bachelor's degree generally called "computer engineering".[citation needed] Both computer engineering and electronic engineering programs include analog and digital circuit design in their curricula. As with most engineering disciplines, a sound knowledge of mathematics and the sciences is necessary for computer engineers.
In many institutions, computer engineering students are allowed to choose areas of in-depth study in their junior and senior year, as the full breadth of knowledge used in the design and application of computers is well beyond the scope of an undergraduate degree. The joint IEEE/ACM Curriculum Guidelines for Undergraduate Degree Programs in Computer Engineering defines the core knowledge areas of computer engineering.[4]
The breadth of disciplines studied in computer engineering is not limited to the above subjects but can include any subject found in engineering.

Posted by the thread starter, 2007-9-17 17:06:18
Computer graphics
From Wikipedia, the free encyclopedia
This article is about the scientific discipline of computer graphics. For more general information on computer graphics and applications, see 2D computer graphics and 3D computer graphics. For the journal by ACM SIGGRAPH, see Computer Graphics (Publication).
Computer graphics is a sub-field of computer science and is concerned with digitally synthesizing and manipulating visual content. Although the term often refers to three-dimensional computer graphics, it also encompasses two-dimensional graphics and image processing. Computer graphics is often differentiated from the field of visualization, although the two have many similarities.
A broad classification of major subfields in computer graphics might be:
Geometry: ways to represent and process surfaces
Animation: ways to represent and manipulate motion
Rendering: algorithms to reproduce light transport
Imaging: image acquisition and image editing


(Image: the Utah teapot)

Definition
Computer graphics studies the digital synthesis and manipulation of visual content; its major subfields are described below.

Geometry
The subfield of geometry studies the representation of three-dimensional objects in a discrete digital setting. Because the appearance of an object depends largely on its exterior, boundary representations are most common in computer graphics. Two-dimensional surfaces are a good analogy for the objects most often used in graphics, though quite often these objects are non-manifold. Since surfaces are not finite, a discrete digital approximation is required: polygonal meshes (and to a lesser extent subdivision surfaces) are by far the most common representation, although point-based representations have been gaining some popularity in recent years (see the Symposium on Point-Based Graphics, for instance). These representations are Lagrangian, meaning the spatial locations of the samples are independent. In recent years, however, Eulerian surface descriptions (i.e., where spatial samples are fixed), such as level sets, have been developed into a useful representation for deforming surfaces which undergo many topological changes (with fluids being the most notable example[1]).

Subfields
Constructive solid geometry - the process by which complicated objects are modelled with implicit geometric objects and boolean operations
Discrete differential geometry - a nascent field which defines geometric quantities for the discrete surfaces used in computer graphics[2]
Digital geometry processing - surface reconstruction, simplification, fairing, mesh repair, parameterization, remeshing, mesh generation, surface compression, and surface editing all fall under this heading[3][4][5]
Point-based graphics - a recent field which focuses on points as the fundamental representation of surfaces
Simulation (e.g. cloth modeling, animation of fluid dynamics, etc.)
Subdivision surfaces

Animation
The subfield of animation studies descriptions for surfaces (and other phenomena) that move or deform over time. Historically most interest in this area has been focused on parametric and data-driven models, but in recent years physical simulation has experienced a renaissance due to the growing computational capacity of modern machines.

Rendering
Rendering converts a model into an image either by simulating light transport to get physically-based photorealistic images, or by applying some kind of style as in non-photorealistic rendering. The two basic operations in realistic rendering are transport (how much light gets from one place to another) and scattering (how surfaces interact with light). See Rendering (computer graphics) for more information.

Transport
Transport describes how illumination in a scene gets from one place to another. Visibility is a major component of light transport.

Scattering
Models of scattering and shading are used to describe the appearance of a surface. Although these issues may seem like problems all on their own, they are studied almost exclusively within the context of rendering[citation needed]. Shading can be broken down into two orthogonal issues, which are often studied independently:
scattering - how light interacts with the surface at a given point
shading - how material properties vary across the surface
The former problem refers to scattering, i.e., the relationship between incoming and outgoing illumination at a given point. Descriptions of scattering are usually given in terms of a bidirectional scattering distribution function or BSDF. The latter issue addresses how different types of scattering are distributed across the surface (i.e., which scattering function applies where). Descriptions of this kind are typically expressed with a program called a shader. (Note that there is some confusion since the word "shader" is sometimes used for programs that describe local geometric variation.)
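As a minimal numeric sketch of a scattering model, the simplest BSDF is Lambertian (ideal diffuse) reflection: outgoing radiance scales with the cosine of the angle between the surface normal and the light direction. The function name and values are invented for illustration.
[code]
import math

def lambert(normal, to_light, albedo, light_intensity):
    """Diffuse reflection at a point; both vectors are unit-length 3-tuples."""
    cos_theta = max(0.0, sum(n * l for n, l in zip(normal, to_light)))
    return albedo * light_intensity * cos_theta / math.pi

# Light arriving 45 degrees off the surface normal.
n = (0.0, 0.0, 1.0)
l = (0.0, math.sin(math.radians(45)), math.cos(math.radians(45)))
print(lambert(n, l, albedo=0.8, light_intensity=10.0))  # about 1.80
[/code]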


History
One of the first displays of computer animation was Futureworld (1976), which included an animation of a human face and hand — produced by Ed Catmull and Fred Parke at the University of Utah.
There are several international conferences and journals where the most significant results in computer graphics are published. Among them are the SIGGRAPH and Eurographics conferences and the Association for Computing Machinery (ACM) Transactions on Graphics journal. The joint Eurographics and ACM SIGGRAPH symposium series features the major venues for the more specialized sub-fields: the Symposium on Geometry Processing, the Symposium on Rendering, and the Symposium on Computer Animation.

Computer graphics research groups

Academia
The number of computer science departments with computer graphics groups has grown rapidly over the past two decades. A partial list of departments notably involved in graphics research includes:
Posted by the thread starter, 2007-9-17 17:08:24
Computer programming
From Wikipedia, the free encyclopedia
"Programming" redirects here. For other uses, see Programming (disambiguation).
Computer programming (often shortened to programming or coding) is the process of writing, testing, and maintaining the source code of computer programs. The source code is written in a programming language. This code may be a modification of existing source or something completely new, the purpose being to create a program that exhibits the desired behavior. The process of writing source code requires expertise in many different subjects, including knowledge of the application domain, specialized algorithms, and formal logic.
Within software engineering, programming (the implementation) is regarded as one phase in a software development process.
In some specialist applications or extreme situations a program may be written or modified (known as patching) by directly storing the numeric values of the machine code instructions to be executed into memory.
There is an ongoing debate on the extent to which the writing of programs is an art, a craft or an engineering discipline.[1] Good programming is generally considered to be the measured application of all three: expert knowledge informing an elegant, efficient, and maintainable software solution (the criteria for "efficient" and "maintainable" vary considerably). The discipline differs from many other technical professions in that programmers generally do not need to be licensed or pass any standardized (or governmentally regulated) certification tests in order to call themselves "programmers" or even "software engineers".
Another ongoing debate is the extent to which the programming language used in writing programs affects the form that the final program takes. This debate is analogous to that surrounding the Sapir-Whorf hypothesis in linguistics.[citation needed]

Programmers
See Computer programmer to learn more about the process of computer programming. Computer programmers are those who write computer software; their job usually spans the activities of the software development process, from design through coding and testing.

Programming languages
Main article: Programming language
Main article: List of programming languages
Different programming languages support different styles of programming (called programming paradigms). The choice of language used is subject to many considerations, such as company policy, suitability to task, availability of third-party packages, or individual preference. Ideally, the programming language best suited for the task at hand will be selected. Trade-offs from this ideal involve finding enough programmers who know the language to build a team, the availability of compilers for that language, and the efficiency with which programs written in a given language execute.

History of programming
(Image: wired plug board for an IBM 402 Accounting Machine)
The earliest programmable machine (that is, a machine whose behavior can be controlled by changes to a "program") was Al-Jazari's programmable humanoid robot in 1206. Al-Jazari's robot was originally a boat with four automatic musicians that floated on a lake to entertain guests at royal drinking parties. His mechanism had a programmable drum machine with pegs (cams) that bump into little levers that operate the percussion. The drummer could be made to play different rhythms and different drum patterns by moving the pegs to different locations.[2]
The Jacquard Loom, developed in 1801, is often quoted as a source of prior art. The machine used a series of pasteboard cards with holes punched in them. The hole pattern represented the pattern that the loom had to follow in weaving cloth. The loom could produce entirely different weaves using different sets of cards. The use of punched cards was also adopted by Charles Babbage around 1830, to control his Analytical Engine.
This innovation was later refined by Herman Hollerith who, in 1896, founded the Tabulating Machine Company (which became IBM). He invented the Hollerith punched card, the card reader, and the key punch machine. These inventions were the foundation of the modern information processing industry. The addition of a plug-board to his 1906 Type I Tabulator allowed it to do different jobs without having to be rebuilt (the first step toward programming). By the late 1940s there were a variety of plug-board programmable machines, called unit record equipment, to perform data processing tasks (card reading). The early computers were also programmed using plug-boards.
(Image: a box of punch cards with several program decks)
The invention of the Von Neumann architecture allowed programs to be stored in computer memory. Early programs had to be painstakingly crafted using the instructions of the particular machine, often in binary notation. Every model of computer would be likely to need different instructions to do the same task. Later, assembly languages were developed that let the programmer specify each instruction in a text format, entering abbreviations for each operation code instead of a number and specifying addresses in symbolic form (e.g. ADD X, TOTAL). In 1954 Fortran, the first higher-level programming language, was invented. This allowed programmers to specify calculations by entering a formula directly (e.g. Y = X*2 + 5*X + 9). The program text, or source, was converted into machine instructions using a special program called a compiler. Many other languages were developed, including ones for commercial programming, such as COBOL. Programs were mostly still entered using punch cards or paper tape (see computer programming in the punch card era). By the late 1960s, data storage devices and computer terminals became inexpensive enough that programs could be created by typing directly into the computers. Text editors were developed that allowed changes and corrections to be made much more easily than with punch cards.
As time has progressed, computers have made giant leaps in processing power. This has brought about newer programming languages that are more abstracted from the underlying hardware. Although these more abstracted languages require additional overhead, in most cases the huge increase in speed of modern computers has brought about little performance decrease compared to earlier counterparts. The benefits of these more abstracted languages are that they offer an easier learning curve for people less familiar with older, lower-level programming languages, and that they allow a more experienced programmer to develop simple applications quickly. Despite these benefits, large complicated programs, and programs that are more dependent on speed, still require the faster and relatively lower-level languages with today's hardware. (The same concerns were raised about the original Fortran language.)
Throughout the second half of the twentieth century, programming was an attractive career in most developed countries. Some forms of programming have been increasingly subject to offshore outsourcing (importing software and services from other countries, usually at a lower wage), making programming career decisions in developed countries more complicated, while increasing economic opportunities in less developed areas. It is unclear how far this trend will continue and how deeply it will impact programmer wages and opportunities. Despite the "outsourcing trend" it can be argued that some of the richest persons on the globe are programmers by profession. Examples: Larry Page and Sergey Brin (Google), Steve Wozniak (Apple Inc.), Hasso Plattner (SAP) and so on. Programming is clearly a leading-edge craftsmanship that continues to reward its practitioners both in countries such as India and in developed countries like the United States and Germany.

Modern programming

Algorithmic Complexity
The academic field and engineering practice of computer programming are largely concerned with discovering and implementing the most efficient algorithms for a given class of problem. For this purpose, algorithms are classified into orders using so-called Big O notation, O(n), which expresses resource use, such as execution time and memory consumption, in terms of the size of an input. Expert programmers are familiar with a variety of well-established algorithms and their respective complexities and use this knowledge to choose algorithms that are best suited to their circumstances.
Research in computer programming includes investigation into the unsolved proposition that P, the class of problems that can be solved deterministically in polynomial time in the size of the input, is not equal to NP, the class of problems whose solutions can be verified in polynomial time but for which no polynomial-time solution method is known. Work has shown that many NP problems can be transformed, in polynomial time, into others such as the travelling salesman problem, thus establishing a large class of "hard" problems which are, for the purposes of analysis, equivalent.
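The asymmetry behind P versus NP can be illustrated by verification: checking that a proposed travelling-salesman tour visits every city once and stays within a length budget takes polynomial time, even though finding the best tour appears not to. This is a minimal sketch with invented city names and distances.
[code]
def verify_tour(cities, dist, tour, budget):
    """True if `tour` visits each city exactly once with total length <= budget."""
    if sorted(tour) != sorted(cities):
        return False
    total = sum(dist[(tour[i], tour[(i + 1) % len(tour)])]
                for i in range(len(tour)))
    return total <= budget

cities = ['A', 'B', 'C']
dist = {('A', 'B'): 1, ('B', 'A'): 1, ('B', 'C'): 2, ('C', 'B'): 2,
        ('A', 'C'): 2, ('C', 'A'): 2}
print(verify_tour(cities, dist, ['A', 'B', 'C'], budget=5))  # True: 1 + 2 + 2 = 5
[/code]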

Methodologies
The first step in every software development project should be requirements analysis, followed by modeling, implementation, and failure elimination (debugging). Many differing approaches exist for each of these tasks. One approach popular for requirements analysis is use case analysis.
Popular modeling techniques include Object-Oriented Analysis and Design (OOAD) and Model-Driven Architecture (MDA). The Unified Modeling Language (UML) is a notation used for both OOAD and MDA.
A similar technique used for database design is Entity-Relationship Modeling (ER Modeling).
Implementation techniques include imperative languages (object-oriented or procedural), functional languages, and logic languages.
Debugging is most often done with IDEs like Visual Studio, and Eclipse. Separate debuggers like gdb are also used.

[edit] Measuring language usage
It is difficult to determine which modern programming languages are the most popular. Some languages are very popular for particular kinds of applications (e.g., COBOL is still strong in the corporate data center, often on large mainframes; FORTRAN in engineering applications; and C in embedded applications), while some languages are regularly used to write many different kinds of applications.
Methods of measuring language popularity include: counting the number of job advertisements that mention the language, the number of books teaching the language that are sold (this overestimates the importance of newer languages), and estimates of the number of existing lines of code written in the language (this underestimates the number of users of business languages such as COBOL).

[edit] Debugging
Debugging is a very important task for every programmer, because an erroneous program is often useless. Languages like C++ and assembly language are very challenging even to expert programmers because of failure modes like buffer overruns, bad pointers and uninitialized memory. A buffer overrun can damage adjacent memory regions and cause a failure at a completely different point in the program. Because of these memory issues, tools like Valgrind, Purify and BoundsChecker are virtually a necessity for modern software development in the C++ language. Languages such as Java, C#, PHP and Python protect the programmer from most of these runtime failure modes, though this may come at the price of a dramatically lower execution speed of the resulting program. That is acceptable for applications where execution speed is determined by other considerations, such as database access or file I/O; the exact cost depends on specific implementation details. Modern Java virtual machines and the .NET Common Language Runtime, for example, use a variety of sophisticated optimizations, including runtime conversion of interpreted instructions to native machine code.
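The contrast can be seen in a few lines of Python (a minimal illustrative sketch): the same off-by-one mistake that silently corrupts adjacent memory in C or C++ raises an exception at the faulty line itself in a managed language, which is far easier to debug.

[code]
buffer = [0] * 8          # a buffer of eight slots, indices 0..7

try:
    buffer[8] = 42        # one past the end: a classic off-by-one error
except IndexError as exc:
    # In C++ this write could silently damage adjacent memory; here the
    # runtime catches it immediately at the offending statement.
    print("caught:", exc)  # caught: list assignment index out of range
[/code]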

OP | Posted on 2007-9-17 17:09:59
Computing
From Wikipedia, the free encyclopedia
For the formal concept of computation, see computation.

[Image: RAM (Random Access Memory)]
The term computing is synonymous with counting and calculating. Originally, people that performed these functions were known as computers. Today it refers to a science and technology that deals with the computation and the manipulation of symbols. "Computing" also refers to the operation and usage of computing machines, the electrical processes carried out within the computing hardware itself, and the theoretical concepts governing them (computer science).

Contents
1 Definitions
2 Science and theory
3 Hardware
3.1 Instruction-level taxonomies
4 Software
5 History of computing
6 Business computing
7 Human factors
8 Computer network
8.1 Wired and wireless computer network
8.2 Computing technology based wireless networking (CbWN)
9 Computer security
10 Data
10.1 Numeric data
10.2 Character data
10.3 Other data topics
11 Mechatronics
12 Classes of computers
13 Companies - current
14 Companies - historic
15 Organizations
15.1 Professional
15.2 Standards bodies
15.3 Open standards
16 See also
17 References



[edit] Definitions
The term computing has sometimes been narrowly defined, as in a 1989 ACM report on Computing as a Discipline[1]:

The discipline of computing is the systematic study of algorithmic processes that describe and transform information: their theory, analysis, design, efficiency, implementation, and application. The fundamental question underlying all computing is 'What can be (efficiently) automated?'

However, a broader definition is generally accepted, as illustrated by the 2005 joint report of the ACM and the IEEE, Computing Curricula 2005[2]:

In a general way, we can define computing to mean any goal-oriented activity requiring, benefiting from, or creating computers. Thus, computing includes designing and building hardware and software systems for a wide range of purposes; processing, structuring, and managing various kinds of information; doing scientific studies using computers; making computer systems behave intelligently; creating and using communications and entertainment media; finding and gathering information relevant to any particular purpose, and so on. The list is virtually endless, and the possibilities are vast.

The same report also recognizes that the meaning of computing depends on the context:

Computing also has other meanings that are more specific, based on the context in which the term is used. For example, an information systems specialist will view computing somewhat differently from a software engineer. Regardless of the context, doing computing well can be complicated and difficult. Because society needs people to do computing well, we must think of computing not only as a profession but also as a discipline.

In short, the concept of computing relates to human knowledge and activities which develop and use computer technologies.


[edit] Science and theory
Computer science
Theory of computation
Computational models
DBLP, which as of July 2007 lists over 910,000 bibliographic entries on computer science and several thousand links to the home pages of computer scientists
Scientific computing
Metacomputing

[edit] Hardware
See information processor for a high-level block diagram.

Computer
Computer hardware
Computer Hardware Design
Computer network
Computer system
History of computing hardware

[edit] Instruction-level taxonomies
After the commoditization of memory, attention turned to optimizing CPU performance at the instruction level. Various methods of speeding up the fetch-execute cycle include:

designing instruction set architectures with simpler, faster instructions: RISC as opposed to CISC
Superscalar instruction execution
VLIW architectures, which make parallelism explicit

[edit] Software
Software engineering
Computer programming
Computational
Software patent
Firmware
Operating systems
Application Software
Databases
Geographic information system
Spreadsheet
Word processor
Programming languages
interpreters
compilers
Speech recognition

[edit] History of computing
History of computing hardware from the tally stick to the quantum computer
Punch Card
Unit record equipment
IBM 700/7000 series
IBM 1400 series
System/360
Early IBM disk storage

[edit] Business computing
Accounting software
Computer-aided design
Computer-aided manufacturing
Computer-assisted dispatch
Customer relationship management
Partner Relationship Management
Data warehouse
Decision support system
Electronic data processing
Enterprise resource planning
Geographic information system
Management information system
Material requirements planning
Strategic enterprise management
Supply chain management
Product Lifecycle Management
Utility Computing

[edit] Human factors
Accessible computing
Human-computer interaction
Human-centered computing

[edit] Computer network

[edit] Wired and wireless computer network
Types
Wide Area Network
Metropolitan Area Network
City Area Network
Town Area Network
Village Area Network
Rural Area Network
Local Area Network
Wireless Local Area Network
Mesh networking
Collaborative workspace
Internet
Network Management

[edit] Computing technology based wireless networking (CbWN)
The main goal of CbWN is to optimize the system performance of flexible wireless networks.

Source coding
Codebook design for side information based transmission techniques such as Precoding
Wyner-Ziv coding for Cooperative wireless communications
Security
Dirty paper coding for cooperative multiple antenna or user precoding
Intelligence
Game theory for wireless networking
Cognitive communications
Flexible sectorization, Beamforming and SDMA
Software
Software defined radio (SDR)
Programmable air-interface
Downloadable algorithm: e.g., a downloadable codebook for Precoding

[edit] Computer security
Cryptology - cryptography - information theory
Cracking - demon dialing - Hacking - war dialing - war driving
Social engineering - Dumpster diving
Physical security - Black bag job
Computer insecurity
Computer surveillance
defensive programming
malware
security engineering

[edit] Data

[edit] Numeric data
integral data types - bit, byte, etc.
real data types:
Floating point (Single precision, Double precision, etc.)
Fixed point
Rational number
Decimal
Binary-coded decimal (BCD)
Excess-3 BCD (XS-3)
Biquinary-coded decimal
representation: Binary - Octal - Decimal - Hexadecimal (hex); a short sketch of these representations follows this list
Computer mathematics - Computer numbering formats
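As referenced above, here is a minimal illustrative Python sketch printing the same integer in the positional representations just listed, plus a simple binary-coded decimal (BCD) encoding in which each decimal digit occupies its own four-bit group:

[code]
n = 2007
print(bin(n))    # 0b11111010111  (binary)
print(oct(n))    # 0o3727         (octal)
print(n)         # 2007           (decimal)
print(hex(n))    # 0x7d7          (hexadecimal)

# Binary-coded decimal: one 4-bit group per decimal digit.
bcd = " ".join(format(int(digit), "04b") for digit in str(n))
print(bcd)       # 0010 0000 0000 0111
[/code]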

[edit] Character data
storage: Character - String - Plain text
representation: ASCII - Unicode - Multibyte - EBCDIC (Widecharacter, Multicharacter) - Fieldata - Baudot

[edit] Other data topics
Data compression
Digital signal processing
Image processing
Indexed
Data management
Routing

[edit] Mechatronics
Punch card
Key punch
Unit record equipment

[edit] Classes of computers
Analog computer
Calculator
Desktop computer
Desktop replacement computer
Digital computer
Embedded computer
Home computer
Laptop
Mainframe
Minicomputer
Microcomputer
Personal computer
Portable computer
Personal digital assistant (aka PDA, or Handheld computer)
Programmable logic controller or PLC
Server
Supercomputer
Tablet PC
Video game console
Workstation

[edit] Companies - current
Apple
Avaya
Dell
Fujitsu
Gateway Computers
Groupe Bull
Hewlett-Packard
Hitachi, Ltd.
Intel Corporation
IBM
Lenovo
Microsoft
NEC Corporation
NetCB
Novell
Panasonic
Red Hat
Silicon Graphics
Sun Microsystems
Unisys

[edit] Companies - historic
Acorn, bought by Olivetti
Bendix Corporation
Burroughs Corporation, merged with Sperry to become Unisys
Compaq, bought by Hewlett-Packard
Control Data
Cray
Data General
Digital Equipment Corporation, bought by Compaq, in turn bought by Hewlett-Packard
Digital Research - a software company for the early microprocessor-based computers
English Electric
Ferranti
General Electric, computer division bought by Honeywell, then Bull
Honeywell, computer division bought by Bull
ICL
Leo
Lisp Machines, Inc.
Marconi
Nixdorf Computer, bought by Siemens
Olivetti
Osborne
Packard Bell
Prime Computer
Raytheon
Royal McBee
RCA
Scientific Data Systems, sold to Xerox
Siemens
Sinclair Research, created the ZX Spectrum, ZX80 and ZX81
Sperry, which bought UNIVAC, and later merged with Burroughs to become Unisys
Symbolics
UNIVAC
Varian Data Machines, a division of Varian Associates which was bought by Sperry
Wang

[edit] Organizations

[edit] Professional
Association for Computing Machinery (ACM)
Association for Survey Computing (ASC)
British Computer Society (BCS)
Canadian Information Processing Society (CIPS)
Computer Measurement Group (CMG)
Institute of Electrical and Electronics Engineers (IEEE), in particular the IEEE Computer Society
Institution of Electrical Engineers
International Electrotechnical Commission (IEC)

[edit] Standards bodies
See also: Standardization and Standards organization

International Electrotechnical Commission (IEC)
International Organization for Standardization (ISO)
Institute of Electrical and Electronics Engineers (IEEE)
Internet Engineering Task Force (IETF)
World Wide Web Consortium (W3C)

[edit] Open standards
See also Open standard

Apdex Alliance -- Application Performance Index
Application Response Measurement (ARM)
OP | Posted on 2007-9-17 17:15:51
Artificial intelligence
From Wikipedia, the free encyclopedia
"AI" redirects here. For other uses of "AI" and "Artificial intelligence", see AI (disambiguation).
[Image: Garry Kasparov playing against Deep Blue, the first machine to win a chess match against a reigning world champion.]
The modern definition of artificial intelligence (or AI) is "the study and design of intelligent agents", where an intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success.[1][2][3] John McCarthy, who coined the term in 1956,[4] defines it as "the science and engineering of making intelligent machines."[5] Other names for the field have been proposed, such as computational intelligence,[2] synthetic intelligence[2][6] or computational rationality.[7] The term artificial intelligence is also used to describe a property of machines or programs: the intelligence that the system demonstrates.
AI research uses tools and insights from many fields, including computer science, psychology, philosophy, neuroscience, cognitive science, linguistics, operations research, economics, control theory, probability, optimization and logic.[8] AI research overlaps with tasks such as robotics, control systems, scheduling, data mining, logistics, speech recognition, facial recognition and many others.[9]
Contents
1 History
2 Mechanisms
2.1 Classifiers
2.2 Conventional AI
2.3 Computational intelligence
3 AI programming languages and styles
4 Research challenges
5 AI in other disciplines
5.1 Philosophy
5.2 Neuro-psychology
5.3 Computer Science
5.4 Business
5.5 Fiction
5.6 Toys and games
6 List of applications
7 See also
8 Notes
9 References
10 Further reading
11 External links



[edit] History
Main articles: History of artificial intelligence and Timeline of artificial intelligence
See also: AI Winter
The field was born at a conference on the campus of Dartmouth College in the summer of 1956. Those who attended, especially John McCarthy, Marvin Minsky, Allen Newell and Herbert Simon, would become the leaders of AI research for many decades.[10] Within a few years, they founded laboratories at MIT, CMU and Stanford that were heavily funded by DARPA.[11] They and their students wrote programs that were, to most people, simply astonishing:[12] computers were solving word problems in algebra, proving logical theorems and speaking English.[13] They made extraordinary predictions about their work:

1965, H. A. Simon: "machines will be capable, within twenty years, of doing any work a man can do"[14]
1967, Marvin Minsky: "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved."[15]
These predictions, and many like them, would not come true. The researchers had failed to anticipate the difficulty of some of the problems they faced: the lack of raw computer power,[16] the intractable combinatorial explosion of their algorithms,[17] the difficulty of representing commonsense knowledge and doing commonsense reasoning,[18] the incredible difficulty of perception and motion[19] and the failings of logic.[20] In 1974, in response to the criticism of England's Sir James Lighthill and ongoing pressure from Congress to fund more productive research, DARPA cut off all undirected, exploratory research in AI. This was the first AI Winter.[21]

In the early 80s, the field would be revived by the commercial success of expert systems and by 1985 the market for AI had reached more than a billion dollars.[22] Minsky and others warned the community that enthusiasm for AI had spiraled out of control and that disappointment was sure to follow.[23] Minsky was right. Beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, more lasting AI Winter began.[24]

In the 90s AI achieved its greatest successes, albeit somewhat behind the scenes. Artificial intelligence was adopted throughout the technology industry, providing the heavy lifting for logistics, data mining, medical diagnosis and many other areas.[25] The success was due to several factors: the incredible power of today's computers (see Moore's law), a greater emphasis on solving specific subproblems, the creation of new ties between AI and other fields working on similar problems, and above all a new commitment by researchers to solid mathematical methods and rigorous scientific standards.[26]


[edit] Mechanisms

Generally speaking, AI systems are built around automated inference engines, including forward reasoning and backward reasoning. Based on certain conditions ("if"), the system infers certain consequences ("then"). AI applications are generally divided into two types in terms of consequences: classifiers ("if shiny then diamond") and controllers ("if shiny then pick up"). Controllers, however, also classify conditions before inferring actions, so classification forms a central part of most AI systems.
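A forward-reasoning engine of this kind can be sketched in a few lines of Python (a toy illustration, not any particular system): starting from known facts, fire every rule whose "if" part is satisfied and add its "then" part as a new fact, until nothing more can be inferred.

[code]
# Each rule pairs a set of conditions with a single conclusion.
rules = [
    ({"shiny", "hard"}, "diamond"),   # classifier: if shiny and hard then diamond
    ({"diamond"}, "pick_up"),         # controller: if diamond then pick up
]

facts = {"shiny", "hard"}
changed = True
while changed:                        # repeat until a fixed point is reached
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)     # forward chaining: add the consequence
            changed = True

print(sorted(facts))   # ['diamond', 'hard', 'pick_up', 'shiny']
[/code]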

Classifiers make use of pattern recognition for condition matching. In many cases this does not require an absolute match, but rather the closest match. Techniques to achieve this divide roughly into two schools of thought: conventional AI and computational intelligence (CI).

Conventional AI research focuses on attempts to mimic human intelligence through symbol manipulation and symbolically structured knowledge bases. This approach limits the situations to which conventional AI can be applied. Lotfi Zadeh stated that "we are also in possession of computational tools which are far more effective in the conception and design of intelligent systems than the predicate-logic-based methods which form the core of traditional AI." These techniques, which include fuzzy logic, have become known as soft computing. These often biologically inspired methods stand in contrast to conventional AI and compensate for the shortcomings of symbolic AI.[27] The two methodologies have also been labeled neats vs. scruffies, with neats emphasizing the use of logic and formal representation of knowledge, while scruffies take an application-oriented, heuristic, bottom-up approach.[28]


[edit] Classifiers
Classifiers are functions that can be tuned according to examples, making them very attractive for use in AI. These examples are known as observations or patterns. In supervised learning, each pattern belongs to a certain predefined class. A class can be seen as a decision that has to be made. All the observations combined with their class labels are known as a data set.

When a new observation is received, that observation is classified based on previous experience. A classifier can be trained in various ways; there are mainly statistical and machine learning approaches.

A wide range of classifiers are available, each with its strengths and weaknesses. Classifier performance depends greatly on the characteristics of the data to be classified; there is no single classifier that works best on all given problems (this is also referred to as the "no free lunch" theorem). Various empirical tests have been performed to compare classifier performance and to find the characteristics of data that determine classifier performance. Determining a suitable classifier for a given problem is, however, still more an art than a science.

The most widely used classifiers are the neural network, support vector machine, k-nearest neighbor algorithm, Gaussian mixture model, naive Bayes classifier, and decision tree.
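As a concrete illustration of one of the classifiers named above, here is a minimal nearest-neighbour sketch in Python (the training data is made up): a training data set pairs each observation with its class label, and a new observation receives the label of the closest stored example.

[code]
def classify(training_set, observation):
    """1-nearest-neighbour: return the label of the closest example."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(training_set, key=lambda ex: sq_dist(ex[0], observation))
    return label

# Observations are (feature-tuple, class-label) pairs.
training_set = [
    ((1.0, 1.0), "small"),
    ((1.2, 0.8), "small"),
    ((6.0, 7.0), "large"),
    ((5.5, 6.5), "large"),
]
print(classify(training_set, (1.1, 0.9)))   # small
print(classify(training_set, (6.2, 6.8)))   # large
[/code]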


[edit] Conventional AI
Conventional AI mostly involves methods now classified as machine learning, characterized by formalism and statistical analysis. This is also known as symbolic AI, logical AI, neat AI and Good Old Fashioned Artificial Intelligence (GOFAI). (Also see semantics.) Methods include:

Expert systems: apply reasoning capabilities to reach a conclusion. An expert system can process large amounts of known information and provide conclusions based on them.
Case-based reasoning: stores a set of problems and answers in an organized data structure called cases. A case-based reasoning system, upon being presented with a problem, finds the case in its knowledge base that is most closely related to the new problem and presents its solutions as an output, with suitable modifications.[29]
Bayesian networks
Behavior based AI: a modular method of building AI systems by hand.

[edit] Computational intelligence
Computational intelligence involves iterative development or learning (e.g., parameter tuning in connectionist systems). Learning is based on empirical data and is associated with non-symbolic AI, scruffy AI and soft computing. Subjects in computational intelligence as defined by IEEE Computational Intelligence Society mainly include:

Neural networks: trainable systems with very strong pattern recognition capabilities.
Fuzzy systems: techniques for reasoning under uncertainty, have been widely used in modern industrial and consumer product control systems; capable of working with concepts such as 'hot', 'cold', 'warm' and 'boiling'.
Evolutionary computation: applies biologically inspired concepts such as populations, mutation and survival of the fittest to generate increasingly better solutions to a problem. These methods most notably divide into evolutionary algorithms (e.g., genetic algorithms) and swarm intelligence (e.g., ant algorithms); a minimal genetic-algorithm sketch follows this list.
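The genetic-algorithm sketch referenced above, in Python (a toy illustration on the classic "OneMax" problem, with made-up parameters): a population of bit strings is evolved toward the all-ones string through selection, crossover and mutation.

[code]
import random

LENGTH, POP, GENS = 20, 30, 40

def fitness(s):                    # count of 1 bits; LENGTH is optimal
    return sum(s)

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:POP // 2]     # survival of the fittest
    children = []
    while len(children) < POP - len(survivors):
        a, b = random.sample(survivors, 2)
        cut = random.randrange(1, LENGTH)
        child = a[:cut] + b[cut:]  # one-point crossover
        if random.random() < 0.1:  # occasional mutation
            i = random.randrange(LENGTH)
            child[i] = 1 - child[i]
        children.append(child)
    pop = survivors + children

print(max(fitness(s) for s in pop))   # usually 20, or very close
[/code]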
With hybrid intelligent systems, attempts are made to combine these two groups. Expert inference rules can be generated through neural network or production rules from statistical learning such as in ACT-R or CLARION (see References below). It is thought that the human brain uses multiple techniques to both formulate and cross-check results. Thus, systems integration is seen as promising and perhaps necessary for true AI, especially the integration of symbolic and connectionist models (e.g., as advocated by Ron Sun).


[edit] AI programming languages and styles
AI research has led to many advances in programming languages including the first list processing language by Allen Newell et al., Lisp dialects, Planner, Actors, the Scientific Community Metaphor, production systems, and rule-based languages.

GOFAI research is often done in programming languages such as Prolog or Lisp. Matlab and Lush (a numerical dialect of Lisp) include many specialist probabilistic libraries for Bayesian systems. AI research often emphasises rapid development and prototyping, using such interpreted languages to enable rapid command-line testing and experimentation. Real-time systems, however, are likely to require dedicated, optimized software.

Many expert systems are organized collections of such if-then statements, called productions. These can include stochastic elements, producing intrinsic variation, or rely on variation produced in response to a dynamic environment.


[edit] Research challenges

[Image: A legged league game from RoboCup 2004 in Lisbon, Portugal.]
The 800-million-euro EUREKA Prometheus Project on driverless cars (1987-1995) showed that fast autonomous vehicles, notably those of Ernst Dickmanns and his team, can drive long distances (over 100 miles) in traffic, automatically recognizing and tracking other cars through computer vision and passing slower cars in the left lane. But the challenge of safe door-to-door autonomous driving in arbitrary environments will require additional research.

The DARPA Grand Challenge was a race for a $2 million prize in which cars had to drive themselves over a hundred miles of challenging desert terrain without any communication with humans, using GPS, computers and a sophisticated array of sensors. In 2005, the winning vehicles completed all 132 miles of the course in just under seven hours. This was the first in a series of challenges aimed at a congressional mandate stating that by 2015 one-third of the operational ground combat vehicles of the US Armed Forces should be unmanned.[30] For November 2007, DARPA introduced the DARPA Urban Challenge, a sixty-mile course through an urban area. DARPA has set the prize money at $2 million for first place, $1 million for second and $500 thousand for third.

A popular challenge amongst AI research groups is the RoboCup and FIRA annual international robot soccer competitions. Hiroaki Kitano has formulated the International RoboCup Federation challenge: "In 2050 a team of fully autonomous humanoid robot soccer players shall win the soccer game, comply [sic] with the official rule [sic] of the FIFA, against the winner of the most recent World Cup."[31]

In the post-dot-com boom era, some search engine websites use a simple form of AI to provide answers to questions entered by the visitor. Questions such as What is the tallest building? can be entered into the search engine's input form, and a list of answers will be returned.



[edit] AI in other disciplines
AI is not only seen in computer science and engineering; it is studied and applied in various other sectors.


[edit] Philosophy
Mind and Brain Portal
Main article: Philosophy of artificial intelligence
The strong AI vs. weak AI debate ("can a man-made artifact be conscious?") is still a hot topic amongst AI philosophers. This involves philosophy of mind and the mind-body problem. Most notably Roger Penrose in his book The Emperor's New Mind and John Searle with his "Chinese room" thought experiment argue that true consciousness cannot be achieved by formal logic systems, while Douglas Hofstadter in Gödel, Escher, Bach and Daniel Dennett in Consciousness Explained argue in favour of functionalism. In many strong AI supporters' opinions, artificial consciousness is considered the holy grail of artificial intelligence. Edsger Dijkstra famously opined that the debate had little importance: "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim."

Epistemology, the study of knowledge, also makes contact with AI, as engineers find themselves debating similar questions to philosophers about how best to represent and use knowledge and information (e.g., semantic networks).


[edit] Neuro-psychology
Main article: Cognitive science
Techniques and technologies in AI which have been directly derived from neuroscience include neural networks, Hebbian learning and the relatively new field of Hierarchical Temporal Memory which simulates the architecture of the neocortex.


[edit] Computer Science
Notable examples include the languages LISP and Prolog, which were invented for AI research but are now used for non-AI tasks. Hacker culture first sprang from AI laboratories, in particular the MIT AI Lab, home at various times to such luminaries as John McCarthy, Marvin Minsky, Seymour Papert (who developed Logo there) and Terry Winograd (who abandoned AI after developing SHRDLU).


[edit] Business
Banks use artificial intelligence systems to organize operations, invest in stocks, and manage properties. In August 2001, robots beat humans in a simulated financial trading competition (BBC News, 2001).[32] A medical clinic can use artificial intelligence systems to organize bed schedules, make a staff rotation, and provide medical information. Many practical applications are dependent on artificial neural networks, networks that pattern their organization in mimicry of a brain's neurons, which have been found to excel in pattern recognition. Financial institutions have long used such systems to detect charges or claims outside of the norm, flagging these for human investigation. Neural networks are also being widely deployed in homeland security, speech and text recognition, medical diagnosis (such as in Concept Processing technology in EMR software), data mining, and e-mail spam filtering.

Robots have become common in many industries, and are often given jobs that are considered dangerous to humans. Robots have proven effective in jobs that are very repetitive (where a lapse in concentration may lead to mistakes or accidents) and in jobs which humans may find degrading. General Motors uses around 16,000 robots for tasks such as painting, welding, and assembly. Japan is the world leader in using and producing robots. In 1995, 700,000 robots were in use worldwide, over 500,000 of which were from Japan.[33]


[edit] Fiction
Main article: Artificial intelligence in fiction
In science fiction, AI is often portrayed as an upcoming power trying to overthrow human authority, usually in the form of futuristic humanoid robots. Alternative plots depict civilizations which chose to be managed by AI, or to ban AI completely. Well-known examples include films such as The Matrix and A.I. Artificial Intelligence.

The inevitability of world domination by AI is also argued by some science/futurist writers such as Kevin Warwick, Hans Moravec and Isaac Asimov. This concept is also explored in the Uncanny Valley hypothesis.


[edit] Toys and games
The 1990s saw some of the first attempts to mass-produce basic artificial intelligence aimed at the home, for education or leisure. This prospered greatly with the Digital Revolution and helped introduce people, especially children, to a life of dealing with various types of AI, specifically in the form of Tamagotchis and Giga Pets, the Internet (for example, basic search engine interfaces are one simple form), and the first widely released robot, Furby. A mere year later an improved type of domestic robot was released in the form of Aibo, a robotic dog with intelligent features and autonomy.


[edit] List of applications
Typical problems to which AI methods are applied
Pattern recognition
Optical character recognition
Handwriting recognition
Speech recognition
Face recognition
Artificial Creativity
Computer vision, Virtual reality and Image processing
Diagnosis (artificial intelligence)
Game theory and Strategic planning
Game artificial intelligence and Computer game bot
Natural language processing, Translation and Chatterbots
Non-linear control and Robotics


Other fields in which AI methods are implemented
Artificial life
Automated reasoning
Automation
Biologically-inspired computing
Colloquis
Concept mining
Data mining
Knowledge representation
Semantic Web
E-mail spam filtering
Robotics
Behavior-based robotics
Cognitive robotics
Cybernetics
Developmental robotics
Epigenetic robotics
Evolutionary robotics
Hybrid intelligent system
Intelligent agent
Intelligent control
Litigation


Lists of researchers, projects & publications
List of AI researchers
List of AI projects
List of important AI publications
OP | Posted on 2007-9-17 17:21:33
Information science
Not to be confused with informatics or information theory.


[Image: The Ancient Library of Alexandria, an early form of information storage and retrieval.]
Information science (also information studies) is an interdisciplinary science primarily concerned with the collection, classification, manipulation, storage, retrieval and dissemination of information.[1] Information science studies the application and usage of knowledge in organizations, and the interaction between people, organizations and information systems. It is often (mistakenly) considered a branch of computer science. It is actually a broad, interdisciplinary field, incorporating not only aspects of computer science, but also library science, cognitive science, and the social sciences.

Information science focuses on understanding problems from the perspective of the stakeholders involved and then applying information (and other) technology as needed. In other words, it tackles systemic problems first rather than individual pieces of technology within that system. In this respect, information science can be seen as a response to technological determinism, the belief that technology "develops by its own laws, that it realizes its own potential, limited only by the material resources available, and must therefore be regarded as an autonomous system controlling and ultimately permeating all other subsystems of society." [2] Within information science, attention has been given in recent years to human–computer interaction, groupware, the semantic web, value sensitive design, iterative design processes and to the ways people generate, use and find information. Today this field is called the Field of Information, and there are a growing number of Schools and Colleges of Information.

Information science should not be confused with information theory, the study of a particular mathematical concept of information, or with library science, a field related to libraries which uses some of the principles of information science.

Contents
1 Definitions of information science
2 History
2.1 Early beginnings
2.2 19th century
2.3 European documentation
2.4 Transition to modern information science
2.5 Important historical figures
3 Topics in information science
3.1 Bibliometrics
3.2 Data modeling
3.3 Document management
3.4 Groupware
3.5 Human-computer interaction
3.6 Information architecture
3.7 Information ethics
3.8 Information retrieval
3.9 Information society
3.10 Information systems
3.11 Intellectual property
3.12 Knowledge management
3.13 Knowledge engineering
3.14 Semantic web
3.15 Usability engineering
3.16 User-centered design
3.17 XML
4 Research
4.1 Research methods
5 See also
6 References
7 Further reading
8 External links

[edit] Definitions of information science
Some authors treat informatics as a synonym for information science. Because of the rapidly evolving, interdisciplinary nature of informatics, a precise meaning of the term "informatics" is presently difficult to pin down; regional differences and international terminology complicate the problem. Some people note that much of what is called "informatics" today was once called "information science", at least in fields such as medical informatics. However, when library scientists also began to use the phrase "information science" to refer to their work, the term informatics emerged in the United States as a response by computer scientists to distinguish their work from that of library science, and in Britain as a term for a science of information that studies natural, as well as artificial or engineered, information-processing systems.


History

Early beginnings

[Image: Gottfried Wilhelm von Leibniz, a philosopher who made significant contributions to what we now call "information science".]
Information science, in studying the collection, classification, manipulation, storage, retrieval and dissemination of information, has origins in the common stock of human knowledge. Information analysis has been carried out by scholars at least as early as the time of the Abyssinian Empire with the emergence of cultural depositories, what are today known as libraries and archives.[3] Institutionally, information science emerged in the 19th century along with many other social science disciplines. As a science, however, it finds its institutional roots in the history of science, beginning with the publication of the first issues of Philosophical Transactions, generally considered the first scientific journal, in 1665 by the Royal Society (London).

The institutionalization of science occurred throughout the 18th century. In 1731, Benjamin Franklin established the Library Company of Philadelphia, the first "public" library, which quickly expanded beyond the realm of books, became a center of scientific experiment, and hosted public exhibitions of scientific experiments.[4] The Académie de Chirurgie (Paris) published Memoires pour les Chirurgiens, generally considered to be the first medical journal, in 1736. The American Philosophical Society, patterned on the Royal Society (London), was founded in Philadelphia in 1743. As numerous other scientific journals and societies were founded, Alois Senefelder developed the concept of lithography for use in mass printing work in Germany in 1796.


19th century

[Image: Joseph Marie Jacquard]
By the 19th century the first signs of information science emerged as separate and distinct from other sciences and social sciences, but in conjunction with communication and computation. In 1801, Joseph Marie Jacquard invented a punched card system to control the operations of a cloth-weaving loom in France; it was the first use of "memory storage of patterns".[5] As chemistry journals emerged throughout the 1820s and 1830s,[6] Charles Babbage developed his "difference engine", the first step towards the modern computer, in 1822, and his "analytical engine" by 1834. By 1843 Richard Hoe had developed the rotary press, and in 1844 Samuel Morse sent the first public telegraph message. By 1848 William F. Poole had begun the Index to Periodical Literature, the first general periodical literature index in the US.

In 1854 George Boole published An Investigation into the Laws of Thought..., which lays the foundations for Boolean algebra, later used in information retrieval.[7] In 1860 a congress was held at the Karlsruhe Technische Hochschule to discuss the feasibility of establishing a systematic and rational nomenclature for chemistry; while the congress did not reach any conclusive results, several key participants returned home with Stanislao Cannizzaro's outline (1858), which ultimately convinced them of the validity of his scheme for calculating atomic weights.[8]

By 1865 the Smithsonian Institution had begun a catalog of current scientific papers, which became the International Catalogue of Scientific Papers in 1902.[9] The following year the Royal Society began publication of its Catalogue of Papers in London. In 1866 Christopher Sholes, Carlos Glidden, and S. W. Soule produced the first practical typewriter. By 1872 Lord Kelvin had devised an analogue computer to predict the tides, and by 1875 Frank Baldwin had been granted the first US patent for a practical calculating machine that performed four arithmetic functions.[10] Alexander Graham Bell invented the telephone in 1876 and Thomas Edison the phonograph in 1877, and the American Library Association was founded in Philadelphia. In 1879 Index Medicus was first issued by the Library of the Surgeon General, U.S. Army, with John Shaw Billings as librarian; the library later issued the Index Catalogue, which achieved an international reputation as the most complete catalog of medical literature.[11]


European documentation

[Image: Paul Otlet, a founder of modern information science]
The discipline of European documentation, which marks the earliest theoretical foundations of modern information science, emerged in the late 19th century together with several more scientific indexes whose purpose was to organize scholarly literature. Most information science historians cite Paul Otlet and Henri La Fontaine as the fathers of information science, with the founding of the International Institute of Bibliography (IIB) in 1895.[12] However, "information science" as a term did not come into popular use in academia until after World War II.[13]

Documentalists emphasized the utilitarian integration of technology and technique toward specific social goals. According to Ronald Day, "As an organized system of techniques and technologies, documentation was understood as a player in the historical development of global organization in modernity – indeed, a major player inasmuch as that organization was dependent on the organization and transmission of information."[14] Otlet and La Fontaine (who won the Nobel Peace Prize in 1913) not only envisioned later technical innovations but also projected a global vision for information and information technologies that speaks directly to postwar visions of a global "information society." Otlet and La Fontaine established numerous organizations dedicated to standardization, bibliography, international associations, and consequently, international cooperation. These organizations were fundamental for ensuring international production in commerce, information, communication and modern economic development, and they later found their global form in such institutions as the League of Nations and the United Nations. Otlet designed the Universal Decimal Classification, based on Melville Dewey's decimal classification system.[15]

Although he lived decades before computers and networks emerged, what he discussed prefigured what ultimately became the World Wide Web. His vision of a great network of knowledge was centered on documents and included the notions of hyperlinks, search engines, remote access, and social networks. (Obviously these notions were described by different names.)

Otlet not only imagined that all the world's knowledge should be interlinked and made available remotely to anyone (what he called an International Network for Universal Documentation), he also proceeded to build a structured document collection that involved standardized paper sheets and cards filed in custom-designed cabinets according to an ever-expanding ontology, an indexing staff which culled information worldwide from as diverse sources as possible, and a commercial information retrieval service which answered written requests by copying relevant information from index cards. Users of this service were even warned if their query was likely to produce more than 50 results per search.[16] By 1937 documentation had formally been institutionalized, as evidenced by the founding of the American Documentation Institute (ADI), later called the American Society for Information Science and Technology.


Transition to modern information science

[Image: Vannevar Bush, a famous information scientist, ca. 1940-44]
With the 1950s came increasing awareness of the potential of automatic devices for literature searching and information storage and retrieval. As these concepts grew in magnitude and potential, so did the variety of information science interests. By the 1960s and 70s, there was a move from batch processing to online modes, and from mainframes to minicomputers and microcomputers. Additionally, traditional boundaries among disciplines began to fade, and many information science scholars joined with library programs. They further made themselves multidisciplinary by incorporating disciplines from the sciences, humanities and social sciences, as well as other professional programs such as law and medicine, into their curricula. By the 1980s, large databases, such as Grateful Med at the National Library of Medicine, and user-oriented services such as Dialog and CompuServe, were for the first time accessible by individuals from their personal computers. The 1980s also saw the emergence of numerous special interest groups to respond to the changes. By the end of the decade, special interest groups were available involving non-print media, social sciences, energy and the environment, and community information systems. Today, information science largely examines the technical bases, social consequences, and theoretical understanding of online databases, the widespread use of databases in government, industry, and education, and the development of the Internet and World Wide Web.[17]

See Chronology of Information Science and Technology

Important historical figures
Tim Berners-Lee
John Shaw Billings
George Boole
Suzanne Briet
Michael Buckland
Vannevar Bush
Melville Dewey
Luciano Floridi
Henri La Fontaine
Frederick Kilgour
Gottfried Leibniz
Pierre Levy
Seymour Lubetzky
Wilhelm Ostwald
Paul Otlet
Jesse Shera



Topics in information science

"Knowledge Map of Information Science" from Zins,Chaim, Journal of the American Society for Information Science and Technology, 17 January 2007
Bibliometrics
Bibliometrics is a set of quantitative methods used to study or measure texts and information and is one of the largest research areas within information science.

Bibliometric methods include the journal Impact Factor, a relatively crude but useful method of estimating the impact of the research published within a journal, in comparison to other journals in the same field. Bibliometrics is often used to evaluate or compare the impact of groups of researchers within a field. In addition it is also used to describe the development of fields, particularly new areas of research.
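The best-known bibliometric measure can be stated in one line: a journal's Impact Factor for year Y is the number of citations received in Y by items the journal published in the two preceding years, divided by the number of citable items it published in those years. A minimal Python sketch with made-up figures:

[code]
# Hypothetical counts for a journal, for illustration only.
citations_in_2007_to = {2005: 320, 2006: 410}   # citations received in 2007
citable_items        = {2005: 150, 2006: 170}   # articles published

impact_factor = (sum(citations_in_2007_to.values())
                 / sum(citable_items.values()))
print(round(impact_factor, 2))   # 2.28
[/code]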



Data modeling
Data modeling is the process of applying a data model theory (a formal description of data models) to create a data model instance. See database model for a list of current data model theories.

When data modelling, we are structuring and organizing data. These data structures are then typically implemented in a database management system. In addition to defining and organizing the data, data modeling will impose (implicitly or explicitly) constraints or limitations on the data placed within the structure.

Managing large quantities of structured and unstructured data is a primary function of information systems. Data models describe structured data for storage in data management systems such as relational databases. They typically do not describe unstructured data, such as word processing documents, email messages, pictures, digital audio, and video.
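A data model instance can be sketched in a few lines of Python (dataclasses standing in for a real database schema; all names here are made up for illustration). Note how the model itself imposes constraints: a Loan cannot exist without a Book, a Borrower and a due date.

[code]
from dataclasses import dataclass
from datetime import date

@dataclass
class Book:                 # entity
    isbn: str
    title: str

@dataclass
class Borrower:             # entity
    borrower_id: int
    name: str

@dataclass
class Loan:                 # relationship between the two entities
    book: Book
    borrower: Borrower
    due: date               # a required attribute of the relationship

loan = Loan(Book("978-0131103627", "The C Programming Language"),
            Borrower(1, "Ada"), date(2007, 10, 1))
print(loan.due)             # 2007-10-01
[/code]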


Document management
A document management system is a computer system (or set of computer programs) used to track and store electronic documents and/or images of paper documents. Document management systems have some overlap with content management systems, enterprise content management systems, digital asset management, document imaging, workflow systems and records management systems.


Groupware
Groupware is software designed to help people involved in a common task achieve their goals. Collaborative software is the basis for computer supported cooperative work.

Software systems such as email, calendaring, text chat and wikis belong in this category. It has been suggested that Metcalfe's law (the more people who use something, the more valuable it becomes) applies to such software.

The more general term social software applies to systems used outside the workplace, for example, online dating services and social networks like Friendster. The study of computer-supported collaboration includes the study of this software and social phenomena associated with it.


Human-computer interaction
Human-computer interaction (HCI), alternatively man-machine interaction (MMI) or computer–human interaction (CHI), is the study of interaction between people (users) and computers. It is an interdisciplinary subject, relating computer science with many other fields of study and research. Interaction between users and computers occurs at the user interface (or simply interface), which includes both software and hardware, for example, general purpose computer peripherals and large-scale mechanical systems such as aircraft and power plants.


Information architecture
Information architecture is the practice of structuring information (knowledge or data) for a purpose. These are often structured according to their context in user interactions or larger databases. The term is most commonly applied to Web development, but also applies to disciplines outside of a strict Web context, such as programming and technical writing. Information architecture is considered an element of user experience design.

The term information architecture describes a specialized skill set which relates to the management of information and employment of informational tools. It has a significant degree of association with the library sciences. Many library schools now teach information architecture.

An alternate definition of information architecture exists within the context of information system design, in which information architecture refers to data modeling and the analysis and design of the information in the system, concentrating on entities and their interdependencies. Data modeling depends on abstraction; the relationships between the pieces of data are of more interest than the particulars of individual records, though cataloging possible values is a common technique. The usability of human-facing systems, and the standards compliance of internal ones, are paramount.


Information ethics
Information ethics is the field that investigates the ethical issues arising from the development and application of information technologies. It provides a critical framework for considering moral issues concerning informational privacy, moral agency (e.g., whether artificial agents may be moral), new environmental issues (especially how agents should behave in the infosphere), and problems arising from the life-cycle (creation, collection, recording, distribution, processing, etc.) of information, especially ownership, copyright and the digital divide. Information ethics is therefore closely related to the fields of computer ethics (Floridi, 1999) and the philosophy of information.

Dilemmas regarding the life of information are becoming increasingly important in a society that is defined as "the information society". Information transmission and literacy are essential concerns in establishing an ethical foundation that promotes fair, equitable, and responsible practices. Information ethics broadly examines issues related to, among other things, ownership, access, privacy, security, and community.

Information technology affects fundamental rights involving copyright protection, intellectual freedom, accountability, and security.

Professional codes offer a basis for making ethical decisions and applying ethical solutions to situations involving information provision and use which reflect an organization’s commitment to responsible information service. Evolving information formats and needs require continual reconsideration of ethical principles and how these codes are applied. Considerations regarding information ethics influence personal decisions, professional practice, and public policy.


Information retrieval
Information retrieval (IR), often studied in conjunction with information storage, is the science of searching for information in documents, searching for documents themselves, searching for metadata which describe documents, or searching within databases, whether relational stand-alone databases or hypertextually-networked databases such as the World Wide Web. There is a common confusion, however, between data retrieval, document retrieval, information retrieval, and text retrieval; each of these has its own bodies of literature, theory, praxis and technologies. IR is, like most nascent fields, interdisciplinary, based on computer science, mathematics, library science, information science, cognitive psychology, linguistics, statistics and physics.

Automated IR systems are used to reduce information overload. Many universities and public libraries use IR systems to provide access to books, journals, and other documents. IR systems revolve around two notions: objects and queries. Queries are formal statements of information needs that the user puts to an IR system. An object is an entity which keeps or stores information in a database; user queries are matched against the objects stored in the database. A document is, therefore, a data object. Often the documents themselves are not kept or stored directly in the IR system, but are instead represented in the system by document surrogates.
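The core of such a system can be sketched in a few lines of Python (an illustration with made-up documents): an inverted index maps each term to the identifiers of the objects that contain it, and a conjunctive query is matched against the index.

[code]
from collections import defaultdict

# Document surrogates: an id standing in for each stored object.
docs = {
    1: "information retrieval in databases",
    2: "retrieval of documents and metadata",
    3: "hypertext and the world wide web",
}

# Build the inverted index: term -> set of document ids.
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)

def search(query):
    """Return ids of documents containing every term of the query."""
    results = set(docs)
    for term in query.split():
        results &= index.get(term, set())
    return sorted(results)

print(search("retrieval"))             # [1, 2]
print(search("retrieval databases"))   # [1]
[/code]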


Information society
Information society is a society in which the creation, distribution, diffusion, use, and manipulation of information is a significant economic, political, and cultural activity. The knowledge economy is its economic counterpart whereby wealth is created through the economic exploitation of understanding.

Specific to this kind of society is the central position information technology has for production, economy, and society at large. Information society is seen as the successor to industrial society. Closely related concepts are the post-industrial society (Daniel Bell), post-fordism, post-modern society, knowledge society, Telematic Society, Information Revolution, and network society (Manuel Castells).


Information systems
Information systems is the discipline concerned with the development, use, application and influence of information technologies. An information system is a technologically implemented medium for recording, storing, and disseminating linguistic expressions, as well as for drawing conclusions from such expressions (a definition due to Börje Langefors, referenced below).

The technology used for implementing information systems by no means has to be computer technology. A notebook in which one lists certain items of interest is, according to that definition, an information system. Likewise, there are computer applications that do not comply with this definition of information systems. Embedded systems are an example. A computer application that is integrated into clothing or even the human body does not generally deal with linguistic expressions. One could, however, try to generalize Langefors' definition so as to cover more recent developments.


Intellectual property
Intellectual property (IP) is a disputed umbrella term for various legal entitlements which attach to certain names, written and recorded media, and inventions. The holders of these legal entitlements are generally entitled to exercise various exclusive rights in relation to the subject matter of the IP. The term intellectual property links the idea that this subject matter is the product of the mind or the intellect together with the political and economical notion of property. The close linking of these two ideas is a matter of some controversy. It is criticised as "a fad" by Mark Lemley of Stanford Law School and by Richard Stallman of the Free Software Foundation as an "overgeneralization" and "at best a catch-all to lump together disparate laws".[18]

Intellectual property laws and enforcement vary widely from jurisdiction to jurisdiction. There are inter-governmental efforts to harmonise them through international treaties such as the 1994 World Trade Organization (WTO) Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPs), while other treaties may facilitate registration in more than one jurisdiction at a time. Enforcement of copyright, disagreements over medical and software patents, and the dispute regarding the nature of "intellectual property" as a cohesive notion[18] have so far prevented the emergence of a cohesive international system.


Knowledge management
Knowledge management comprises a range of practices used by organisations to identify, create, represent, and distribute knowledge for reuse, awareness, and learning across the organisations.

Knowledge Management programs are typically tied to organisational objectives and are intended to lead to the achievement of specific outcomes, such as shared intelligence, improved performance, competitive advantage, or higher levels of innovation.

Knowledge transfer (one aspect of Knowledge Management) has always existed in one form or another. Examples include on-the-job peer discussions, formal apprenticeship, corporate libraries, professional training, and mentoring programs. However, since the late twentieth century, additional technology has been applied to this task, such as


Knowledge engineering
Knowledge engineering (KE), often studied in conjunction with knowledge management, refers to the building, maintaining and development of knowledge-based systems. It has a great deal in common with software engineering, and is related to many computer science domains such as artificial intelligence, databases, data mining, expert systems, decision support systems and geographic information systems. Knowledge engineering is also related to mathematical logic, as well as strongly involved in cognitive science and socio-cognitive engineering where the knowledge is produced by socio-cognitive aggregates (mainly humans) and is structured according to our understanding of how human reasoning and logic works.


Semantic web
Semantic Web is an evolving extension of the World Wide Web in which web content can be expressed not only in natural language, but also in a form that can be understood, interpreted and used by software agents, thus permitting them to find, share and integrate information more easily.[19] It derives from W3C director Tim Berners-Lee's vision of the Web as a universal medium for data, information, and knowledge exchange.

At its core, the Semantic Web comprises a philosophy,[20] a set of design principles,[21] collaborative working groups, and a variety of enabling technologies. Some elements of the Semantic Web are expressed as prospective future possibilities that have yet to be implemented or realized.[22] Other elements of the Semantic Web are expressed in formal specifications.[23] These include the Resource Description Framework (RDF), a variety of data interchange formats (e.g., RDF/XML, N3, Turtle), and notations such as RDF Schema (RDFS) and the Web Ontology Language (OWL), all of which are intended to formally describe concepts, terms, and relationships within a given problem domain.
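To make the idea of machine-readable web content concrete, here is a minimal, hypothetical sketch using the third-party Python library rdflib (assumed to be installed; the resource names and URIs are invented for the example):

[code]
# Parse a few RDF triples written in Turtle notation and iterate over them.
# Requires the third-party library rdflib (e.g. pip install rdflib).
from rdflib import Graph

# Three statements: the subject is a person, has a name, and proposed a thing.
turtle_data = """
@prefix ex: <http://example.org/> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

ex:TimBernersLee a foaf:Person ;
    foaf:name "Tim Berners-Lee" ;
    ex:proposed ex:WorldWideWeb .
"""

g = Graph()
g.parse(data=turtle_data, format="turtle")

# A software agent can now query the graph instead of scraping natural language.
for subject, predicate, obj in g:
    print(subject, predicate, obj)
[/code]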


Usability engineering
Usability engineering is a subset of human factors that is specific to computer science and is concerned with the question of how to design software that is easy to use. It is closely related to the fields of human-computer interaction and industrial design. The term "usability engineering" (UE) (in contrast to other names of the discipline, like interaction design or user experience design) tends to describe a pragmatic approach to user interface design which emphasizes empirical methods and operational definitions of user requirements for tools. In International Organization for Standardization-approved definitions, usability is considered a context-dependent agreement on the effectiveness, efficiency and satisfaction with which specific users should be able to perform tasks. Advocates of this approach engage in task analysis, then prototype interface designs and conduct usability tests. On the basis of such tests, the technology is (ideally) re-designed or (occasionally) the operational targets for user performance are revised.


User-centered design
User-centered design is a design philosophy and a process in which the needs, wants, and limitations of the end user of an interface or document are given extensive attention at each stage of the design process. User-centered design can be characterized as a multi-stage problem-solving process that not only requires designers to analyze and foresee how users are likely to use an interface, but also to test the validity of their assumptions with regard to user behaviour in real-world tests with actual users. Such testing is necessary because it is often very difficult for the designers of an interface to understand intuitively what a first-time user of their design experiences, and what each user's learning curve may look like.

The chief difference from other interface design philosophies is that user-centered design tries to optimize the user interface around how people can, want, or need to work, rather than forcing the users to change how they work to accommodate the system or function.


[edit] Research
Many universities have entire schools or departments devoted to the study of information science, while numerous information science scholars can be found in disciplines such as communication, law, sociology, computer science, and library science (see List of I-Schools).
OP | Posted 2007-9-17 17:28:34
Information retrieval
Information retrieval (IR) is the science of searching for information in documents, searching for documents themselves, searching for metadata which describe documents, or searching within databases, whether relational stand-alone databases or hypertextually-networked databases such as the World Wide Web. There is a common confusion, however, between data retrieval, document retrieval, information retrieval, and text retrieval, and each of these has its own bodies of literature, theory, praxis and technologies. IR is interdisciplinary, based on computer science, mathematics, library science, information science, cognitive psychology, linguistics, statistics and physics.
Automated IR systems are used to reduce information overload. Many universities and public libraries use IR systems to provide access to books, journals, and other documents. IR systems revolve around the notions of query and object. Queries are formal statements of information needs that users put to an IR system. An object is an entity which keeps or stores information in a database. User queries are matched against objects stored in the database. A document is, therefore, a data object. Often the documents themselves are not kept or stored directly in the IR system, but are instead represented in the system by document surrogates.
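As a toy illustration of matching queries to stored objects, the following Python sketch builds an inverted index over three invented document surrogates; it is a sketch of the general idea, not of any particular system:

[code]
# A minimal inverted index: each term maps to the set of documents containing
# it, so a query can be matched against objects without scanning every text.
documents = {
    1: "information retrieval and databases",
    2: "searching the world wide web",
    3: "retrieval of metadata from databases",
}

# Build the index: term -> set of document ids.
index = {}
for doc_id, text in documents.items():
    for term in text.split():
        index.setdefault(term, set()).add(doc_id)

def search(query):
    """Return ids of documents containing every query term (boolean AND)."""
    result = None
    for term in query.split():
        postings = index.get(term, set())
        result = postings if result is None else result & postings
    return result or set()

print(search("retrieval databases"))  # -> {1, 3}
[/code]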
In 1992 the US Department of Defense, along with the National Institute of Standards and Technology (NIST), cosponsored the Text Retrieval Conference (TREC) as part of the TIPSTER text program. Its aim was to support research within the information retrieval community by supplying the infrastructure needed for large-scale evaluation of text retrieval methodologies.
Web search engines such as Google, Yahoo search or Live.com are the most visible IR applications.


Performance measures
There are several measures of the performance of an information retrieval system. The measures rely on a collection of documents and a query for which the relevancy of the documents is known. All common measures described here assume a ground-truth notion of relevancy: every document is known to be either relevant or non-relevant to a particular query. In practice, queries may be ill-posed and there may be different shades of relevancy.

Precision
The proportion of retrieved and relevant documents to all the documents retrieved:

    precision = |{relevant documents} ∩ {retrieved documents}| / |{retrieved documents}|
In binary classification, precision is analogous to positive predictive value. Precision takes all retrieved documents into account. It can also be evaluated at a given cut-off rank, considering only the topmost results returned by the system. This measure is called precision at n or P@n.
Note that the meaning and usage of "precision" in the field of Information Retrieval differs from the definition of accuracy and precision within other branches of science and technology.
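As a concrete illustration, here is a minimal Python sketch of precision and P@n over a hypothetical ranked result list (document ids and relevance judgements are invented for the example):

[code]
# Precision: fraction of retrieved documents that are relevant.
relevant = {1, 3, 5, 7}          # ground-truth relevant documents (invented)
retrieved = [3, 2, 5, 8, 1]      # ranked list returned by the system (invented)

def precision(retrieved, relevant):
    """Fraction of the retrieved documents that are relevant."""
    return len(set(retrieved) & relevant) / len(retrieved)

def precision_at_n(retrieved, relevant, n):
    """Precision computed over the top n results only (P@n)."""
    return precision(retrieved[:n], relevant)

print(precision(retrieved, relevant))          # 3/5 = 0.6
print(precision_at_n(retrieved, relevant, 3))  # 2/3 ≈ 0.67
[/code]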

[edit] Recall
The proportion of relevant documents that are retrieved, out of all relevant documents available:

    recall = |{relevant documents} ∩ {retrieved documents}| / |{relevant documents}|
In binary classification, recall is called sensitivity.
It is trivial to achieve recall of 100% by returning all documents in response to any query. Therefore, recall alone is not enough; one also needs to measure the number of non-relevant documents retrieved, for example by computing the precision.

[edit] Fall-Out
The proportion of non-relevant documents that are retrieved, out of all non-relevant documents available:

    fall-out = |{non-relevant documents} ∩ {retrieved documents}| / |{non-relevant documents}|
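The following sketch, continuing the same hypothetical data, computes recall and fall-out; `collection` stands for the full document set, which is an assumption of the example rather than something every IR system has at hand:

[code]
# Recall and fall-out over an invented collection of eight documents.
collection = {1, 2, 3, 4, 5, 6, 7, 8}
relevant = {1, 3, 5, 7}
retrieved = [3, 2, 5, 8, 1]

def recall(retrieved, relevant):
    """Fraction of all relevant documents that were retrieved."""
    return len(set(retrieved) & relevant) / len(relevant)

def fall_out(retrieved, relevant, collection):
    """Fraction of all non-relevant documents that were retrieved."""
    non_relevant = collection - relevant
    return len(set(retrieved) & non_relevant) / len(non_relevant)

print(recall(retrieved, relevant))                # 3/4 = 0.75
print(fall_out(retrieved, relevant, collection))  # 2/4 = 0.5
[/code]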

[edit] F-measure
The weighted harmonic mean of precision and recall, the traditional F-measure or balanced F-score, is:

    F = 2 · (precision · recall) / (precision + recall)
This is also known as the F[sub]1[/sub] measure, because recall and precision are evenly weighted.
The general formula for non-negative real β is:

    F[sub]β[/sub] = (1 + β²) · (precision · recall) / (β² · precision + recall)
Two other commonly used F measures are the F[sub]2[/sub] measure, which weights recall twice as much as precision, and the F[sub]0.5[/sub] measure, which weights precision twice as much as recall.
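A small sketch of the general formula above (β = 1 recovers the balanced F-score); the function name f_beta is our own:

[code]
def f_beta(precision, recall, beta=1.0):
    """Weighted harmonic mean; beta > 1 favours recall, beta < 1 precision."""
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

print(f_beta(0.6, 0.75))       # F1: 2*0.6*0.75/(0.6+0.75) ≈ 0.667
print(f_beta(0.6, 0.75, 2.0))  # F2: weights recall twice as much as precision
print(f_beta(0.6, 0.75, 0.5))  # F0.5: weights precision twice as much as recall
[/code]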

[edit] Average precision
The precision and recall above are based on the whole list of documents returned by the system. Average precision emphasizes returning more relevant documents earlier. It is the average of the precisions computed after truncating the list after each of the relevant documents in turn:

    AP = ( Σ[sub]r=1..N[/sub] P(r) · rel(r) ) / |{relevant documents}|

where r is the rank, N the number of documents retrieved, rel() a binary function on the relevance of a given rank, and P() the precision at a given cut-off rank.
If there are several queries with known relevancies available, the mean average precision is the mean value of the average precisions computed for each of the queries separately.
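A minimal sketch of average precision and mean average precision under the same hypothetical relevance judgements:

[code]
def average_precision(retrieved, relevant):
    """Mean of P@r over the ranks r at which a relevant document appears,
    divided by the total number of relevant documents."""
    hits, precisions = 0, []
    for rank, doc_id in enumerate(retrieved, start=1):
        if doc_id in relevant:
            hits += 1
            precisions.append(hits / rank)  # precision at this cut-off rank
    return sum(precisions) / len(relevant) if relevant else 0.0

def mean_average_precision(runs):
    """MAP over several (retrieved, relevant) pairs, one pair per query."""
    return sum(average_precision(ret, rel) for ret, rel in runs) / len(runs)

retrieved = [3, 2, 5, 8, 1]
relevant = {1, 3, 5, 7}
print(average_precision(retrieved, relevant))
# hits at ranks 1, 3, 5: (1/1 + 2/3 + 3/5) / 4 ≈ 0.567
[/code]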

[edit] Model types
[Figure: categorization of IR models (translated from the German entry; original source Dominik Kuropka)]
For successful IR, it is necessary to represent the documents in some way. There are a number of models for this purpose. They can be categorized according to two dimensions, as in the figure above: the mathematical basis and the properties of the model.

[edit] First dimension: mathematical basis
    [li]Algebraic models usually represent documents and queries as vectors, matrices or tuples, which are transformed by a finite number of algebraic operations into a one-dimensional similarity measure; see the sketch below. [/li]
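As a hypothetical illustration of the algebraic family, the sketch below represents documents and queries as term-count vectors and reduces them to a single similarity value via the cosine measure (the vocabulary and texts are invented):

[code]
import math

def to_vector(text, vocabulary):
    """Represent a text as a vector of term counts over a fixed vocabulary."""
    words = text.split()
    return [words.count(term) for term in vocabulary]

def cosine_similarity(a, b):
    """Cosine of the angle between a and b: 0 = unrelated, 1 = same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

vocabulary = ["information", "retrieval", "web", "search"]
doc = to_vector("information retrieval and web retrieval", vocabulary)
query = to_vector("information retrieval", vocabulary)
print(cosine_similarity(doc, query))  # ≈ 0.866
[/code]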

[edit] Second dimension: properties of the model
    [li]Models without term interdependencies treat different terms/words as independent. This is usually represented in vector space models by the orthogonality assumption of term vectors, and in probabilistic models by an independence assumption for term variables. [/li]
    [li]Models with immanent term interdependencies allow a representation of interdependencies between terms. However, the degree of interdependency between two terms is defined by the model itself. It is usually directly or indirectly derived (e.g., by dimensionality reduction) from the co-occurrence of those terms in the whole set of documents. [/li]
    [li]Models with transcendent term interdependencies allow a representation of interdependencies between terms, but they do not allege how the interdependency between two terms is defined. They rely on an external source for the degree of interdependency between two terms (for example, a human or a sophisticated algorithm). [/li]

[edit] Timeline
    [li]1890: Hollerith tabulating machines were used to analyze the US census (Herman Hollerith). [/li]
    [li]1945: Vannevar Bush's As We May Think appeared in Atlantic Monthly. [/li]
    [li]Late 1940s: The US military confronted problems of indexing and retrieval of wartime scientific research documents captured from the Germans. [/li]
    [li]1947: Hans Peter Luhn (research engineer at IBM since 1941) began work on a mechanized, punch-card-based system for searching chemical compounds. [/li]
    [li]1950: The term "information retrieval" may have been coined by Calvin Mooers. [/li]
    [li]1950s: Growing concern in the US about a "science gap" with the USSR motivated and encouraged funding for mechanized literature searching systems (Allen Kent et al.) and the invention of citation indexing (Eugene Garfield). [/li]
    [li]1955: Allen Kent joined Case Western Reserve University, and eventually became associate director of the Center for Documentation and Communications Research. [/li]
    [li]1958: The International Conference on Scientific Information in Washington DC included consideration of IR systems as a solution to the problems identified. See: Proceedings of the International Conference on Scientific Information, 1958 (National Academy of Sciences, Washington, DC, 1959). [/li]
    [li]1959: Hans Peter Luhn published "Auto-encoding of documents for information retrieval." [/li]
    [li]1960: Melvin Earl (Bill) Maron and J. L. Kuhns published "On relevance, probabilistic indexing, and information retrieval" in Journal of the ACM 7(3):216-244, July 1960. [/li]
    [li]Early 1960s: Gerard Salton began work on IR at Harvard; he later moved to Cornell. [/li]
    [li]1962: Cyril W. Cleverdon published early findings of the Cranfield studies, developing a model for IR system evaluation. See: Cyril W. Cleverdon, "Report on the Testing and Analysis of an Investigation into the Comparative Efficiency of Indexing Systems". Cranfield Coll. of Aeronautics, Cranfield, England, 1962. [/li]
    [li]1962: Kent published Information Analysis and Retrieval. [/li]
    [li]1963: The Weinberg report "Science, Government and Information" gave a full articulation of the idea of a "crisis of scientific information." The report was named after Dr. Alvin Weinberg. [/li]
    [li]1963: Joseph Becker and Robert Hayes published a text on information retrieval: Becker, Joseph; Hayes, Robert Mayo. Information storage and retrieval: tools, elements, theories. New York, Wiley (1963). [/li]
    [li]1964: Karen Spärck Jones finished her thesis at Cambridge, Synonymy and Semantic Classification, and continued work on computational linguistics as it applies to IR. [/li]
    [li]1964: The National Bureau of Standards sponsored a symposium titled "Statistical Association Methods for Mechanized Documentation," which produced several highly significant papers, including G. Salton's first published reference (we believe) to the SMART system. [/li]
    [li]Mid-1960s: The National Library of Medicine developed MEDLARS (Medical Literature Analysis and Retrieval System), the first major machine-readable database and batch retrieval system. [/li]
    [li]Mid-1960s: Project Intrex at MIT. [/li]
    [li]1965: J. C. R. Licklider published Libraries of the Future. [/li]
    [li]1966: Don Swanson was involved in studies at the University of Chicago on Requirements for Future Catalogs. [/li]
    [li]1968: Gerard Salton published Automatic Information Organization and Retrieval. [/li]
    [li]1968: J. W. Sammon's RADC tech report "Some Mathematics of Information Storage and Retrieval..." outlined the vector model. [/li]
    [li]1969: Sammon's "A nonlinear mapping for data structure analysis" (IEEE Transactions on Computers) was the first proposal for a visualization interface to an IR system. [/li]
    [li]Late 1960s: F. W. Lancaster completed evaluation studies of the MEDLARS system and published the first edition of his text on information retrieval. [/li]
    [li]Early 1970s: The first online systems: NLM's AIM-TWX and MEDLINE, Lockheed's Dialog, and SDC's ORBIT. [/li]
    [li]Early 1970s: Theodor Nelson, promoting the concept of hypertext, published Computer Lib/Dream Machines. [/li]
    [li]1971: N. Jardine and C. J. Van Rijsbergen published "The use of hierarchic clustering in information retrieval", which articulated the "cluster hypothesis." (Information Storage and Retrieval, 7(5), pp. 217-240, Dec 1971) [/li]
    [li]1975: Three highly influential publications by Salton fully articulated his vector processing framework and term discrimination model:
      [li]A Theory of Indexing (Society for Industrial and Applied Mathematics) [/li]
      [li]"A theory of term importance in automatic text analysis" (JASIS v. 26) [/li]
      [li]"A vector space model for automatic indexing" (CACM 18:11) [/li]
    [/li]
    [li]1978: The first ACM SIGIR conference. [/li]
    [li]1979: C. J. Van Rijsbergen published Information Retrieval (Butterworths), with heavy emphasis on probabilistic models. [/li]
    [li]1980: The first international ACM SIGIR conference, joint with the British Computer Society IR group, in Cambridge. [/li]
    [li]1982: Belkin, Oddy, and Brooks proposed the ASK (Anomalous State of Knowledge) viewpoint for information retrieval. This was an important concept, though their automated analysis tool proved ultimately disappointing. [/li]
    [li]1983: Salton (and M. McGill) published Introduction to Modern Information Retrieval (McGraw-Hill), with heavy emphasis on vector models. [/li]
    [li]Mid-1980s: Efforts to develop end-user versions of commercial IR systems. [/li]
    [li]1985-1993: Key papers on, and experimental systems for, visualization interfaces, with work by D. B. Crouch, Robert R. Korfhage, M. Chalmers, A. Spoerri and others. [/li]
    [li]1989: First World Wide Web proposals by Tim Berners-Lee at CERN. [/li]
    [li]1992: First TREC conference. [/li]
    [li]1997: Publication of Korfhage's Information Retrieval, with emphasis on visualization and multi-reference-point systems. [/li]
    [li]Late 1990s: Web search engines implemented many features formerly found only in experimental IR systems. [/li]

[edit] Open source systems
    [li]DataparkSearch, a search engine written in C, GPL [/li]
    [li]Egothor, a high-performance, full-featured text search engine written entirely in Java [/li]
    [li]Glimpse and Webglimpse, advanced site search software [/li]
    [li]ht://dig, open source web crawling software [/li]
    [li]Lemur, a language-modelling IR toolkit [/li]
    [li]Lucene, an Apache Jakarta project [/li]
    [li]MG, a full-text retrieval system, now maintained by the Greenstone Digital Library Software project [/li]
    [li]Smart, an early IR engine from Cornell University [/li]
    [li]Sphinx, an open-source (GPL) SQL full-text search engine [/li]
    [li]Terrier (TERabyte RetrIEveR), an information retrieval platform written in Java [/li]
    [li]Wumpus, a multi-user information retrieval system [/li]
    [li]Xapian, an open source IR platform based on Muscat [/li]
    [li]Zebra, a GPL structured text/XML/MARC boolean search IR engine supporting Z39.50 and Web Services [/li]
    [li]Zettair, a compact and fast search engine written in C, able to handle large amounts of text [/li]

[edit] Other retrieval tools
    [li]ASPseek [/li]
    [li]iHOP, an information retrieval system for the biomedical domain [/li]
    [li]MEDIE, an intelligent search engine retrieving biomedical events from Medline [/li]
    [li]EBIMed, an information retrieval (and extraction) system over Medline [/li]
    [li]Info-PubMed, a protein interaction database with 200,000 gene/protein names mined from Medline [/li]
    [li]Fluid Dynamics Search Engine (FDSE), a search engine written in Perl; freeware and shareware versions are available [/li]
    [li]GalaTex, XQuery full-text search (XML query text search) [/li]
    [li]Information Storage and Retrieval Using Mumps (online GPL text) [/li]
    [li]mnoGoSearch, written in C; it can index multilingual web sites and many database types [/li]
    [li]Sphinx, a free SQL full-text search engine [/li]
    [li]BioSpider, a free metabolite/drug/protein information retrieval system (used in the annotation of DrugBank and the Human Metabolome Database) [/li]

[edit] Research Groups (in no particular order)
Posted 2007-9-24 08:58:27
OP, how do you get onto Wiki? Through a proxy?
OP | Posted 2007-9-28 13:52:04
Quoting post #9 by hnuer, 2007-09-24 08:58:
OP, how do you get onto Wiki? Through a proxy?

Exactly!

Browsers nowadays come with proxy plug-ins, for example world and maxthon, so getting onto Wiki is still fairly convenient.
Posted 2007-10-4 09:57:04
Where is Computer Aided Geometric Design????