The scientists and engineers of today are relentless in their continuing study and analysis of the world about us, from the microcosm to the macrocosm. A central purpose of this study is to gain sufficient scientific information and insight to enable the development of both representative and useful models of the superabundance of physical processes that surround us. Engineers need these models and the associated insight in order to build the information processing systems and control systems that comprise these new and emerging technologies. Much of the early modeling work on these systems has been based on linear time-invariant system theory and its extensive use of Fourier transform theory for both continuous and discrete systems and signals. However, many of the signals arising in nature and real systems are neither stationary nor linear but tend to be concentrated in both time and frequency. Hence a new methodology is needed to take these factors properly into account.
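The failure of pure Fourier methods for signals concentrated in both time and frequency is usually made precise by the time-frequency uncertainty principle. For orientation, here is the standard statement (textbook notation of our choosing, not a formula quoted from this book):

```latex
% Time-frequency uncertainty (Gabor limit); standard notation, not the book's.
% For a unit-energy signal g(t) with Fourier transform \hat{g}(\omega), define
\[
  \sigma_t^2 = \int_{-\infty}^{\infty} (t-\bar{t})^2\,|g(t)|^2\,dt,
  \qquad
  \sigma_\omega^2 = \frac{1}{2\pi}\int_{-\infty}^{\infty} (\omega-\bar{\omega})^2\,|\hat{g}(\omega)|^2\,d\omega .
\]
% Then the two spreads cannot both be made small:
\[
  \sigma_t\,\sigma_\omega \;\ge\; \tfrac{1}{2},
\]
% with equality only for Gaussian windows. Joint time-frequency tools
% (short-time Fourier and wavelet transforms) are designed around this limit.
```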
Recent Advances in Robot Learning contains seven papers on robot learning written by leading researchers in the field. As the selection of papers illustrates, the field of robot learning is both active and diverse. A variety of machine learning methods, ranging from inductive logic programming to reinforcement learning, is being applied to many subproblems in robot perception and control, often with objectives as diverse as parameter calibration and concept formulation. While no unified robot learning framework has yet emerged to cover the variety of problems and approaches described in these papers and other publications, a clear set of shared issues underlies many robot learning problems. Machine learning, when applied to robotics, is situated: it is embedded into a real-world system that tightly integrates perception, decision making and execution. Since robot learning involves decision making, there is an inherent active learning issue. Robotic domains are usually complex, yet the expense of using actual robotic hardware often prohibits the collection of large amounts of training data. Most robotic systems are real-time systems. Decisions must be made within critical or practical time constraints. These characteristics present challenges and constraints to the learning system. Since these characteristics are shared by other important real-world application domains, robotics is a highly attractive area for research on machine learning. On the other hand, machine learning is also highly attractive to robotics. There is a great variety of open problems in robotics that defy a static, hand-coded solution. Recent Advances in Robot Learning is an edited volume of peer-reviewed original research comprising seven invited contributions by leading researchers. This research work has also been published as a special issue of Machine Learning (Volume 23, Numbers 2 and 3).
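As a loose illustration of one method the volume surveys, here is a minimal sketch of tabular Q-learning with an epsilon-greedy action choice, which also hints at the active-learning issue the editors raise. This is a generic textbook sketch in Python; none of it is drawn from the contributed papers, and all names are ours:

```python
import random

def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q.get((s_next, b), 0.0) for b in actions)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)

def epsilon_greedy(Q, s, actions, epsilon=0.2):
    """The active-learning tension in miniature: the robot's action both
    earns reward and determines which training data it will ever see."""
    if random.random() < epsilon:
        return random.choice(actions)          # explore
    return max(actions, key=lambda a: Q.get((s, a), 0.0))  # exploit
```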
Table of contents (excerpt):
7 Performance of BTCs and their Applications
7.1 Introduction
7.2 Some Results from the Literature
7.3 Applications of Block Turbo Codes
7.3.1 Broadband Wireless Access Standard
7.3.2 Advanced Hardware Architectures (AHA)
7.3.3 COMTECH EF DATA
7.3.4 Turbo Concept
7.3.5 Paradise Data Com
7.4 Summary
8 Implementation Issues
8.1 Fixed-point Implementation of Turbo Decoder
8.1.1 Input Data Quantization for DVB-RCS Turbo Codes
8.1.2 Input Data Quantization for BTC
8.2 The Effect of the Correction Term in the Max-Log-MAP Algorithm
8.3 Effect of Channel Impairments on Turbo Codes
8.3.1 System Model for the Investigation of Channel Impairments
8.3.2 Channel SNR Mismatch
8.3.2.1 Simulation Results
8.3.3 Carrier Phase Recovery
8.3.3.1 The Effect of Phase Offset on the Performance of RM Turbo Codes
8.3.3.2 The Effect of Preamble Size on the Performance of RM Turbo Codes
8.3.3.3 Simulation Results
8.4 Hardware Implementation of Turbo Codes
8.5 Summary
9 Low Density Parity Check Codes
9.1 Gallager Codes: Regular Binary LDPC Codes
9.2 Random Block Codes
9.2.1 Generator Matrix
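Section 8.2 of the contents above refers to the correction term in the Max-Log-MAP algorithm. For orientation, the standard identity behind it (the Jacobian logarithm, textbook material rather than this book's own derivation) is:

```latex
% Jacobian logarithm used in Log-MAP decoding (standard identity):
\[
  \ln\!\left(e^{x}+e^{y}\right) \;=\; \max(x,y) \;+\; \ln\!\left(1+e^{-|x-y|}\right).
\]
% Max-Log-MAP keeps only \max(x,y); the dropped term \ln(1+e^{-|x-y|}) is the
% "correction term" whose effect on decoder performance the chapter studies.
```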
Media processing applications, such as three-dimensional graphics, video compression, and image processing, currently demand 10-100 billion operations per second of sustained computation. Fortunately, hundreds of arithmetic units can easily fit on a modestly sized 1 cm² chip in modern VLSI. The challenge is to provide these arithmetic units with enough data to enable them to meet the computation demands of media processing applications. Conventional storage hierarchies, which frequently include caches, are unable to bridge the data bandwidth gap between modern DRAM and tens to hundreds of arithmetic units. A data bandwidth hierarchy, however, can bridge this gap by scaling the provided bandwidth across the levels of the storage hierarchy. The stream programming model enables media processing applications to exploit a data bandwidth hierarchy effectively. Media processing applications can naturally be expressed as a sequence of computation kernels that operate on data streams. This programming model exposes the locality and concurrency inherent in these applications and enables them to be mapped efficiently to the data bandwidth hierarchy. Stream programs are able to utilize inexpensive local data bandwidth when possible and consume expensive global data bandwidth only when necessary. Stream Processor Architecture presents the architecture of the Imagine streaming media processor, which delivers a peak performance of 20 billion floating-point operations per second. Imagine efficiently supports 48 arithmetic units with a three-tiered data bandwidth hierarchy. At the base of the hierarchy, the streaming memory system employs memory access scheduling to maximize the sustained bandwidth of external DRAM. At the center of the hierarchy, the global stream register file enables streams of data to be recirculated directly from one computation kernel to the next without returning data to memory. Finally, local distributed register files that directly feed the arithmetic units enable temporary data to be stored locally so that it does not need to consume costly global register bandwidth. The bandwidth hierarchy enables Imagine to achieve up to 96% of the performance of a stream processor with infinite bandwidth from memory and the global register file.
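As a loose sketch of the stream programming model described above, the following Python fragment chains kernels over streams so that intermediate records flow kernel-to-kernel rather than round-tripping through memory. The kernels and values are invented for illustration; real Imagine kernels are compiled onto 48 hardware ALUs, not Python generators:

```python
def kernel(fn):
    """Wrap a per-record function as a stream kernel (a generator transformer)."""
    def run(stream):
        for record in stream:
            yield fn(record)
    return run

# Hypothetical media-processing kernels, stand-ins for real filter stages.
scale     = kernel(lambda px: px * 0.5)
threshold = kernel(lambda px: 1 if px > 0.3 else 0)

# Chaining kernels mirrors recirculation through the stream register file:
# records flow from one kernel to the next without being written back to
# "memory" (here, without materializing an intermediate list).
pixels = (p / 255.0 for p in range(256))
result = list(threshold(scale(pixels)))
```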
Fatigue Life Prediction of Solder Joints in Electronic Packages with ANSYS® describes the method in great detail, starting from its theoretical basis. The reader is supplied with an add-on software package for ANSYS® designed for solder joint fatigue reliability analysis of electronic packages. Specific steps of the analysis method are worked through in examples, leaving no room for confusion. The add-on package and the examples together make it possible for an engineer with a working knowledge of ANSYS® to perform solder joint reliability analysis. Fatigue Life Prediction of Solder Joints in Electronic Packages with ANSYS® allows engineers to conduct fatigue reliability analysis of solder joints in electronic packages.
In recent years, there has been considerable interest in highly integrated, low power, portable wireless devices. This monograph focuses on the problem of low power GFSK/GMSK modulation and presents an architectural approach for improved performance. It includes several valuable tools for the practicing engineer.
Speech coding has been an ongoing area of research for several decades, yet the level of activity and interest in this area has expanded dramatically in the last several years. Important advances in algorithmic techniques for speech coding have recently emerged and excellent progress has been achieved in producing high quality speech at bit rates as low as 4.8 kb/s. Although the complexity of the newer, more sophisticated algorithms greatly exceeds that of older methods (such as ADPCM), today's powerful programmable signal processor chips allow rapid technology transfer from research to product development and permit many new cost-effective applications of speech coding. In particular, low bit rate voice technology is converging with the needs of the rapidly evolving digital telecommunication networks. The IEEE Workshop on Speech Coding for Telecommunications was held in Vancouver, British Columbia, Canada, from September 5 to 8, 1989. The objective of the workshop was to provide a forum for discussion of recent developments and future directions in speech coding. The workshop attracted over 130 researchers from several countries and its technical program included 51 papers.
This book is concerned with wafer fabrication and the factories that manufacture microprocessors and other integrated circuits. With the invention of the transistor in 1947, the world as we knew it changed. The transistor led to the microprocessor, and the microprocessor, the guts of the modern computer, has created an epoch of virtually unlimited information processing. The electronics and computer revolution has brought about, for better or worse, a new way of life. This revolution could not have occurred without wafer fabrication and its associated processing technologies. A microprocessor is fabricated via a lengthy, highly complex sequence of chemical processes. The success of modern chip manufacturing is a miracle of technology and a tribute to the hundreds of engineers who have contributed to its development. This book will delineate the magnitude of the accomplishment, and present methods to analyze and predict the performance of the factories that make the chips. The set of topics covered juxtaposes several disciplines of engineering. A primary subject is the chemical engineering aspects of the electronics industry, an industry typically thought to be strictly an electrical engineer's playground. The book also delves into issues of manufacturing, operations performance, economics, and the dynamics of material movement, topics often considered the domain of industrial engineering and operations research. Hopefully, we have provided in this work a comprehensive treatment of both the technology and the factories of wafer fabrication. Novel features of these factories include long process flows and a dominance of processing over operational issues.
"Lo, soul! seest thou not God's purpose from the first? The earth to be spann'd, connected by net-work" (Walt Whitman, "Passage to India," Leaves of Grass, 1900). The Internet is growing at a tremendous rate today. New services, such as telephony and multimedia, are being added to the pure data-delivery framework of yesterday. Such high demands on capacity could lead to a "bandwidth crunch" at the core wide-area network, resulting in degradation of service quality. Fortunately, technological innovations have emerged which can provide relief to the end-user and overcome the Internet's well-known delay and bandwidth limitations. At the physical layer, a major overhaul of existing networks has been envisaged, from electronic media (such as twisted pair and cable) to optical fibers, in the wide area, in the metropolitan area, and even in local area settings. In order to exploit the immense bandwidth potential of the optical fiber, interesting multiplexing techniques have been developed over the years. Wavelength division multiplexing (WDM) is one such promising technique, in which multiple channels are operated along a single fiber simultaneously, each on a different wavelength. These channels can be independently modulated to accommodate dissimilar bit rates and data formats, if so desired. Thus, WDM carves up the huge bandwidth of an optical fiber into channels whose bandwidths (1-10 Gbps) are compatible with peak electronic processing speed.
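As a back-of-the-envelope illustration of this carving-up, with round numbers of our own choosing (typical C-band width and grid spacing, not figures from the book):

```latex
% Illustrative WDM channel count (typical round numbers, not the book's):
% roughly 4.4 THz of usable C-band fiber bandwidth on a 100 GHz channel grid
% gives
\[
  N \;\approx\; \frac{4.4\ \text{THz}}{100\ \text{GHz}} \;=\; 44 \ \text{channels},
  \qquad
  44 \times 10\ \text{Gbps} \;=\; 440\ \text{Gbps aggregate},
\]
% with each individual channel staying within peak electronic processing speeds.
```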
Dependable Network Computing provides insights into various problems facing millions of global users resulting from the `internet revolution'. It covers real-time problems involving software, servers, and large-scale storage systems, with adaptive fault-tolerant routing and dynamic reconfiguration techniques. Also included is material on routing protocols, QoS, and deadlock- and livelock-freedom issues. All chapters are written by leading specialists in their respective fields. Dependable Network Computing provides useful information for scientists, researchers, and application developers building networks from commercial off-the-shelf components.
Real-time systems are defined as those for which correctness depends not only on the logical properties of the produced results, but also on the temporal properties of these results. In a database, real-time means that in addition to typical logical consistency constraints, such as a constraint on a data item's value, there are constraints on when transactions execute and on the `freshness' of the data transactions access. The challenges and tradeoffs faced by the designers of real-time database systems are quite different from those faced by the designers of general-purpose database systems. To achieve the fundamental requirements of timeliness and predictability, not only do conventional methods for scheduling and transaction management have to be redesigned, but also new concepts that have not been considered in conventional database systems or in real-time systems need to be added. Real-Time Database and Information Systems: Research Advances is devoted to new techniques for scheduling of transactions, concurrency management, transaction logging, database languages, and new distributed database architectures. Real-Time Database and Information Systems: Research Advances is primarily intended for practicing engineers and researchers working in the growing area of real-time database and information retrieval systems. For practitioners, the book will provide a much needed bridge for technology transfer and continued education. For researchers, the book will provide a comprehensive reference for well-established results. The book can also be used in a senior or graduate level course on real-time systems, real-time database systems, and database systems, or closely related courses.
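A minimal sketch of the kind of data-freshness constraint described above, assuming the common `absolute validity interval' convention from the real-time database literature (the terminology and the abort policy are generic, not necessarily this volume's):

```python
import time

class DataItem:
    """A data item that is 'fresh' only within avi seconds of its last update
    (the 'absolute validity interval' convention; assumed, not the book's)."""
    def __init__(self, value, avi):
        self.value = value
        self.avi = avi
        self.updated_at = time.monotonic()

    def write(self, value):
        self.value = value
        self.updated_at = time.monotonic()

    def is_fresh(self):
        return time.monotonic() - self.updated_at <= self.avi

def read_or_abort(item):
    """A transaction may read only fresh data; otherwise it aborts.
    Whether to restart, refresh, or skip is where schedulers differ."""
    if not item.is_fresh():
        raise RuntimeError("stale data: transaction must abort or refresh")
    return item.value

sensor = DataItem(value=21.5, avi=0.5)   # valid for half a second
print(read_or_abort(sensor))
```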
This is the first book on the subject of multi-standard wireless receivers. It covers both the analysis and design aspects of CMOS radio receivers, with primary focus on receivers for mobile terminals. The subject of multi-standard data converter design for base stations is also covered.
Purpose of the book: This book presents an approach to improving the standard object-oriented programming model. The proposal is aimed at supporting a larger range of incremental behavior variations and thus promises to be more effective in mastering the complexity of today's software. The ability to deal with the evolutionary nature of software is one of the main merits of object-oriented data abstraction and inheritance. Object-orientation makes it possible to organize software in a structured way by separating the description of different kinds of an abstract data type into different classes and loosely connecting them through the inheritance hierarchy. Due to this separation, the software becomes free of the conditional logic previously needed for distinguishing between different kinds of abstractions and can thus more easily be incrementally extended to support new kinds of abstractions. In other words, classes and inheritance are means to properly model variations of behavior related to the existence of different kinds of an abstract data type. The support for extensibility and reuse with respect to such kind-specific behavior variations is among the main reasons for the increasing popularity of object-oriented programming over the last two decades. However, this popularity does not prevent us from questioning the real effectiveness of current object-oriented techniques in supporting incremental variations. In fact, this popularity makes a critical investigation of the variations that can actually be performed incrementally even more important.
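The point about conditional logic versus inheritance is easy to make concrete. A minimal Python illustration (our example, not the book's notation):

```python
# Without data abstraction: one function, conditional logic per kind.
def area_conditional(kind, dims):
    if kind == "circle":
        return 3.14159 * dims[0] ** 2
    elif kind == "rect":
        return dims[0] * dims[1]
    # every new kind of shape forces an edit to this function

# With classes and inheritance: each kind-specific variation lives in its
# own class, so adding a kind is an incremental extension, not a modification.
class Shape:
    def area(self):
        raise NotImplementedError

class Circle(Shape):
    def __init__(self, r): self.r = r
    def area(self): return 3.14159 * self.r ** 2

class Rect(Shape):
    def __init__(self, w, h): self.w, self.h = w, h
    def area(self): return self.w * self.h

print(sum(s.area() for s in [Circle(1.0), Rect(2.0, 3.0)]))
```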
Parallel Numerical Computations with Applications contains selected edited papers presented at the 1998 Frontiers of Parallel Numerical Computations and Applications Workshop, along with invited papers from leading researchers around the world. These papers cover a broad spectrum of topics in parallel numerical computation with applications, such as advanced parallel numerical and computational optimization methods, novel parallel computing techniques, numerical fluid mechanics, and other applications in material sciences, signal and image processing, semiconductor technology, and electronic circuit and system design. This state-of-the-art volume will be an up-to-date resource for researchers in the areas of parallel and distributed computing.
Organizations of coastal states constantly require information about the movements, identities and intentions of vessels sailing in the waters of interest to them, which may be coastal waters, straits, inland waterways, rivers, lakes or open seas. This interest may stem from defense requirements, from needs for the protection of off-shore resources, enhanced search and rescue services, and deterrence of smuggling, drug trafficking and other illegal activities, and/or from the provision of vessel traffic services for safe and efficient navigation and protection of the environment. Meeting these needs requires a well designed maritime surveillance and control system capable of tracking ships and providing the other types of information required by a variety of user groups, ranging from port authorities, shipping companies and marine exchanges to governments and the military. Principles of Integrated Maritime Surveillance Systems will be of vital interest to anyone responsible for the design, implementation or provision of such a system, and is therefore essential reading for this whole range of user groups, from port authorities to shipping companies and marine exchanges as well as civil governments and the military.
Neural Network Parallel Computing is the first book available to the professional market on neural network computing for optimization problems. This introductory book is not only for the novice reader, but for experts in a variety of areas including parallel computing, neural network computing, computer science, communications, graph theory, computer-aided design for VLSI circuits, molecular biology, management science, and operations research. The goal of the book is to facilitate an understanding of the uses of neural network models in real-world applications. Neural Network Parallel Computing presents a major breakthrough in science and a variety of engineering fields. The computational power of neural network computing is demonstrated by solving numerous problems, such as N-queens, crossbar switch scheduling, four-coloring and k-colorability, graph planarization and channel routing, RNA secondary structure prediction, the knight's tour, spare allocation, sorting and searching, and tiling. Neural Network Parallel Computing is an excellent reference for researchers in all areas covered by the book. Furthermore, the text may be used in a senior or graduate level course on the topic.
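To give a flavor of how such networks attack constraint problems, here is a toy sketch: a penalty energy whose minima are the feasible solutions, driven by asynchronous binary updates that never increase it. We keep only the row and column constraints (the n-rooks core of N-queens, without diagonals), so this is a simplification of the book's formulations, not one of its programs:

```python
import random

def energy(v, n):
    """Penalty energy: zero exactly when each row and each column holds one 1."""
    rows = sum((sum(v[i]) - 1) ** 2 for i in range(n))
    cols = sum((sum(v[i][j] for i in range(n)) - 1) ** 2 for j in range(n))
    return rows + cols

def hopfield_rooks(n=4, steps=20000, seed=1):
    """Asynchronous updates: each chosen neuron takes whichever binary state
    gives the lower energy, so the energy is non-increasing (toy dynamics;
    random restarts may be needed to escape plateaus)."""
    rng = random.Random(seed)
    v = [[rng.randint(0, 1) for _ in range(n)] for _ in range(n)]
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        v[i][j] = 0
        e0 = energy(v, n)
        v[i][j] = 1
        e1 = energy(v, n)
        v[i][j] = 1 if e1 < e0 else 0
        if min(e0, e1) == 0:
            break
    return v

print(hopfield_rooks())   # usually converges to a permutation matrix
```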
A color time-varying image can be described as a three-dimensional vector (representing the colors in an appropriate color space) defined on a three-dimensional spatiotemporal space. In conventional analog television, a one-dimensional signal suitable for transmission over a communication channel is obtained by sampling the scene in the vertical and temporal directions and by frequency-multiplexing the luminance and chrominance information. In digital processing and transmission systems, sampling is applied in the horizontal direction too, either on a signal which has already been scanned in the vertical and temporal directions or directly in three dimensions when using some solid-state sensor. As a consequence, in recent years it has been considered quite natural to assess the potential advantages arising from an entirely multidimensional approach to the processing of video signals. As a simple but significant example, a composite color video signal, such as the conventional PAL or NTSC signal, possesses a three-dimensional spectrum which, by using suitable three-dimensional filters, permits horizontal sampling at a rate which is less than that required for correctly sampling the equivalent one-dimensional signal. More recently it has been widely recognized that improving the picture quality of current and advanced television systems requires well-chosen signal processing algorithms which are multidimensional in nature, within the demanding constraints of a real-time implementation.
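The argument rests on the standard multidimensional sampling theorem. In our notation (not necessarily the book's), sampling u(x, y, t) on a lattice generated by a nonsingular matrix V replicates the spectrum on the reciprocal lattice:

```latex
% Multidimensional sampling relation (standard result, our notation):
\[
  U_s(\mathbf{f}) \;=\; \frac{1}{\lvert\det V\rvert}
  \sum_{\mathbf{k}\in\mathbb{Z}^3} U\!\left(\mathbf{f} - V^{-T}\mathbf{k}\right),
\]
% so aliasing is avoided whenever a 3-D filter confines U to a single cell of
% the reciprocal lattice; this is why a composite PAL/NTSC spectrum admits a
% lower horizontal sampling rate than the equivalent 1-D analysis suggests.
```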
Since the early 1980s, CAD frameworks have received a great deal of attention, both in the research community and in the commercial arena. It is generally agreed that CAD framework technology promises much: advanced CAD frameworks can turn collections of individual tools into effective and user-friendly design environments. But how can this promise be fulfilled? CAD Frameworks: Principles and Architecture describes the design and construction of CAD frameworks. It presents principles for building integrated design environments and shows how a CAD framework can be based on these principles. It derives the architecture of a CAD framework in a systematic way, using well-defined primitives for representation. This architecture defines how the many different framework sub-topics, ranging from concurrency control to design flow management, relate to each other and come together into an overall system. The origin of this work is the research and development performed in the context of the Nelsis CAD Framework, which has been a working system for well over eight years, gaining functionality while evolving from one release to the next. The principles and concepts presented in this book have been field-tested in the Nelsis CAD Framework. CAD Frameworks: Principles and Architecture is primarily intended for EDA professionals, both in industry and in academia, but is also valuable outside the domain of electronic design. Many of the principles and concepts presented are also applicable to other design-oriented application domains, such as mechanical design or computer-aided software engineering (CASE). It is thus a valuable reference for all those involved in computer-aided design.
Real-time systems are now used in a wide variety of applications. Conventionally, they were configured at design time to perform a given set of tasks and could not readily adapt to dynamic situations. The concept of imprecise and approximate computation has emerged as a promising approach to providing scheduling flexibility and enhanced dependability in dynamic real-time systems. The concept can be utilized in a wide variety of applications, including signal processing, machine vision, databases, networking, etc. In dynamic real-time systems, resource unavailability may mean that computations cannot be carried through to completion; for those who wish to build systems that deal safely with this while continuing to operate, the techniques of imprecise and approximate computation facilitate the generation of partial results that may enable the system to operate safely and avert catastrophe. Audience: of special interest to researchers. May be used as a supplementary text in courses on real-time systems.
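A minimal sketch of the imprecise-computation pattern: a mandatory part produces a usable result, and optional iterations refine it until the deadline. The example task (a slowly converging series for pi) is ours, chosen only to make the refinement visible:

```python
import time

def imprecise_pi(deadline_s):
    """Imprecise-computation pattern: the mandatory part yields a usable
    partial result; optional iterations refine it while time remains.
    (Illustrative task of our choosing, not an example from the book.)"""
    start = time.monotonic()
    # Mandatory part: a crude but safe first approximation (Leibniz series,
    # 4 * (1 - 1/3 + 1/5 - ...), truncated after its first term).
    result, k, sign = 4.0, 1, -1.0
    # Optional part: refine until the deadline.
    while time.monotonic() - start < deadline_s:
        result += sign * 4.0 / (2 * k + 1)
        sign, k = -sign, k + 1
    return result   # partial but usable; precision scales with time spent

print(imprecise_pi(0.01))   # more time yields a better approximation
```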
Multithreaded Processor Design takes the unique approach of designing a multithreaded processor from the ground up. Every aspect is carefully considered to form a balanced design, rather than making incremental changes to an existing design and then ignoring problem areas. The general-purpose parallel computer is an elusive goal. Multithreaded processors have emerged as a promising solution to this conundrum by forming some amalgam of the commonplace control-flow (von Neumann) processor model with the more exotic data-flow approach. This new processor model offers many exciting possibilities, and there is much research to be performed to make this technology widespread. Multithreaded processors utilize the simple and efficient sequential execution technique of control-flow, together with data-flow-like concurrency primitives. This supports the conceptually simple but powerful idea of rescheduling rather than blocking when waiting for data, e.g., from large and distributed memories, thereby tolerating long data transmission latencies. This makes multiprocessing far more efficient, because the cost of moving data between distributed memories and processors can be hidden by other activity. The same hardware mechanisms may also be used to synchronize interprocess communications to awaiting threads, thereby alleviating operating system overheads. Supporting synchronization and scheduling mechanisms in hardware naturally adds complexity. Consequently, existing multithreaded processor designs have tended to make incremental changes to existing control-flow processor designs that resolve some problems but not others. Multithreaded Processor Design serves as an excellent reference source and is suitable as a text for advanced courses in computer architecture dealing with the subject.
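The reschedule-rather-than-block idea can be made concrete with a toy simulation: each thread yields the latency of a long load, and the scheduler runs other ready threads while loads are in flight. The cycle accounting below (one cycle of work per dispatch) is deliberately simplistic and is our invention, not a model from the book:

```python
from collections import deque

def thread(loads):
    # A thread is a generator: each yielded value models the latency, in
    # cycles, of a long memory load issued after one cycle of work.
    for latency in loads:
        yield latency

def run(threads):
    """Reschedule-on-load scheduler: while one thread's load is in flight,
    other ready threads execute, hiding the latency."""
    ready = deque(enumerate(threads))
    waiting = []                       # (wake_cycle, thread_id, generator)
    cycle = 0
    while ready or waiting:
        due = [w for w in waiting if w[0] <= cycle]
        waiting = [w for w in waiting if w[0] > cycle]
        ready.extend((tid, g) for _, tid, g in due)
        if not ready:
            cycle = min(w[0] for w in waiting)   # nothing runnable: stall
            continue
        tid, g = ready.popleft()
        cycle += 1                               # one cycle of useful work
        try:
            latency = next(g)                    # issue a load, don't block
            waiting.append((cycle + latency, tid, g))
        except StopIteration:
            pass                                 # thread retired
    return cycle

# Three threads, each issuing 4 loads of 20 cycles: total cycles come out far
# below the serial blocking alternative (roughly 3 * 4 * 21), because the
# load latencies of different threads overlap.
print(run([thread([20] * 4) for _ in range(3)]))
```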
From the Foreword: Modern digital signal processing applications present a great challenge to the system designer. Algorithms are becoming increasingly complex, and yet they must be realized under tight performance constraints. Nevertheless, these DSP algorithms are often built from many constituent canonical subtasks (e.g., IIR and FIR filters, FFTs) that can be reused in other subtasks. Design is then a problem of composing these core entities into a cohesive whole to provide both the intended functionality and the required performance. In order to organize the design process, there have been two major approaches. The top-down approach starts with an abstract, concise, functional description which can be quickly generated. On the other hand, the bottom-up approach starts from a detailed low-level design where performance can be directly assessed, but where the requisite design and interface detail take a long time to generate. In this book, the authors show a way to effectively resolve this tension by retaining the high-level conciseness of VHDL while parameterizing it to get a good fit to specific applications through reuse of core library components. Since they build on a pre-designed set of core elements, accurate area, speed and power estimates can be percolated to high-level design routines which explore the design space. Results are impressive, and the cost model provided will prove to be very useful. Overall, the authors have provided an up-to-date approach, doing a good job of getting performance out of high-level design. The methodology provided makes good use of extant design tools, and is realistic in terms of the industrial design process. The approach is interesting in its own right, but is also of direct utility, and it will give the existing DSP CAD tools a highly competitive alternative. The techniques described have been developed within ARPA's RASSP (Rapid Prototyping of Application Specific Signal Processors) project, and should be of great interest there, as well as to many industrial designers. Professor Jonathan Allen, Massachusetts Institute of Technology
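To suggest the flavor of the methodology (in Python rather than VHDL, with invented core names and cost figures rather than the book's calibrated models), here is a sketch of pre-characterized cost estimates being percolated to a high-level design-space search:

```python
import math

# Hypothetical pre-characterized core library: parameters -> (area, delay).
# All names and figures are invented stand-ins for a pre-designed VHDL core
# library (FIR, FFT, ...) with calibrated cost models.
def fir_cost(taps, width):
    return taps * width * 1.5, taps * 0.8        # (mm^2, ns), made up

def fft_cost(points, width):
    stages = int(math.log2(points))
    return stages * width * 4.0, stages * 2.5    # (mm^2, ns), made up

def explore(max_delay_ns):
    """High-level design-space exploration: sum per-core estimates and keep
    the smallest-area configuration meeting the delay budget."""
    best = None
    for taps in (8, 16, 32):
        for width in (12, 16):
            a_fir, d_fir = fir_cost(taps, width)
            a_fft, d_fft = fft_cost(64, width)
            area, delay = a_fir + a_fft, d_fir + d_fft
            if delay <= max_delay_ns and (best is None or area < best[0]):
                best = (area, delay, {"taps": taps, "width": width})
    return best

print(explore(max_delay_ns=40.0))
```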
Rule-Based Programming is a broad presentation of the rule-based programming method with many example programs showing the strengths of the rule-based approach. The rule-based approach has been used extensively in the development of artificial intelligence systems, such as expert systems and machine learning. This rule-based programming technique has been applied in such diverse fields as medical diagnostic systems, insurance and banking systems, as well as automated design and configuration systems. Rule-based programming is also helpful in bridging the semantic gap between an application and a program, allowing domain specialists to understand programs and participate more closely in their development. Over sixty programs are presented and all programs are available from an ftp site. Many of these programs are presented in several versions allowing the reader to see how realistic programs are elaborated from `back of envelope' models. Metaprogramming is also presented as a technique for bridging the `semantic gap'. Rule-Based Programming will be of interest to programmers, systems analysts and other developers of expert systems as well as to researchers and practitioners in artificial intelligence, computer science professionals and educators.
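The execution model behind rule-based programming fits in a few lines. A minimal forward-chaining sketch in Python (the book's own sixty-plus programs use a dedicated rule language, and the rules below are invented):

```python
# Minimal forward-chaining rule engine: rules pair a set of conditions with a
# conclusion, and any rule whose conditions hold fires until no new facts appear.
rules = [
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"suspect_measles"},       "order_blood_test"),
    ({"customer_over_65"},      "apply_senior_discount"),
]

def forward_chain(facts, rules):
    """Fire rules to a fixpoint: the loop ends when a full pass adds nothing."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"has_fever", "has_rash"}, rules))
# adds 'suspect_measles', which in turn triggers 'order_blood_test'
```

Note how the domain knowledge sits in the rules, not in control flow; this separation is what lets domain specialists read and extend such programs.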
Intelligent Unmanned Ground Vehicles describes the technology developed and the results obtained by the Carnegie Mellon Robotics Institute in the course of the DARPA Unmanned Ground Vehicle (UGV) project. The goal of this work was to equip off-road vehicles with computer-controlled, unmanned driving capabilities. The book describes contributions in the area of mobility for UGVs including: tools for assembling complex autonomous mobility systems; on-road and off-road navigation; sensing techniques; and route planning algorithms. In addition to basic mobility technology, the book covers a number of integrated systems demonstrated in the field in realistic scenarios. The approaches presented in this book can be applied to a wide range of mobile robotics applications, from automated passenger cars to planetary exploration, and construction and agricultural machines. Intelligent Unmanned Ground Vehicles shows the progress that was achieved during this program, from brittle, specially built robots operating under highly constrained conditions, to groups of modified commercial vehicles operating in tough environments. One measure of progress is how much of this technology is being used in other applications. For example, much of the work in road-following, architectures and obstacle detection has been the basis for the Automated Highway Systems (AHS) prototypes currently under development. AHS will lead to commercial prototypes within a few years. The cross-country technology is also being used in the development of planetary rovers with a projected launch date within a few years. The architectural tools built under this program have been used in numerous applications, from an automated harvester to an autonomous excavator. The results reported in this work provide tools for further research and development leading to practical, reliable and economical mobile robots.
Leaf Cell and Hierarchical Compaction Techniques presents novel algorithms developed for the compaction of large layouts. These algorithms have been implemented as part of a system that has been used on many industrial designs. The focus of Leaf Cell and Hierarchical Compaction Techniques is three-fold. First, new ideas for compaction of leaf cells are presented. These cells can range from small transistor-level layouts to very large layouts generated by automatic Place and Route tools. Second, new approaches for hierarchical pitchmatching compaction are described and the concept of a Minimum Design is introduced. The system for hierarchical compaction is built on top of the leaf cell compaction engine and uses the algorithms implemented for leaf cell compaction in a modular fashion. Third, a new representation for designs called Virtual Interface, which allows for efficient topological specification and representation of hierarchical layouts, is outlined. The Virtual Interface representation binds all of the algorithms and their implementations for leaf and hierarchical compaction into an intuitive and easy-to-use system. From the Foreword: `...In this book, the authors provide a comprehensive approach to compaction based on carefully conceived abstractions. They describe the design of algorithms that provide true hierarchical compaction based on linear programming, but cut down the complexity of the computations through introduction of innovative representations that capture the provably minimum amount of required information needed for correct compaction. In most compaction algorithms, the complexity goes up with the number of design objects, but in this approach, complexity is due to the irregularity of the design, and hence is often tractable for most designs which incorporate substantial regularity. Here the reader will find an elegant treatment of the many challenges of compaction, and a clear conceptual focus that provides a unified approach to all aspects of the compaction task...' Jonathan Allen, Massachusetts Institute of Technology
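The classic reduction behind one-dimensional compaction helps in reading this description: minimum-spacing rules become edges of a constraint graph, and longest-path distances from the left edge give the leftmost legal coordinates. The sketch below is this textbook toy, not the book's hierarchical, linear-programming machinery; the layout elements and spacings are invented:

```python
# One-dimensional leaf-cell compaction as a longest-path problem.
# Each constraint (a, b, d) encodes a minimum spacing rule: x[b] >= x[a] + d.
constraints = [
    ("left_edge", "poly1",  2),
    ("left_edge", "diff1",  1),
    ("diff1",     "poly1",  3),
    ("poly1",     "metal1", 4),
    ("diff1",     "metal1", 2),
]

def compact(constraints, source="left_edge"):
    """Bellman-Ford-style relaxation computing longest paths from the source,
    i.e. the leftmost legal position of every layout element."""
    x = {source: 0}
    nodes = ({source} | {a for a, _, _ in constraints}
                      | {b for _, b, _ in constraints})
    for _ in range(len(nodes)):          # enough passes for any acyclic graph
        for a, b, d in constraints:
            if a in x and x.get(b, float("-inf")) < x[a] + d:
                x[b] = x[a] + d
    return x

print(compact(constraints))
# -> {'left_edge': 0, 'poly1': 4, 'diff1': 1, 'metal1': 8}
```

Real compactors must also handle cycles from symmetry and maximum-distance rules, which is where linear-programming formulations like the book's come in.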