The two-dimensional Hubbard model for interacting fermions on a square lattice is widely considered a promising approach to understanding Cooper pair formation in the high-Tc cuprates. In the present work this model is investigated by means of the functional renormalization group, based on an exact flow equation for the effective average action. In addition to the fermionic degrees of freedom, bosonic fields are introduced which correspond to different collective orders, for example magnetism and superconductivity. The interactions between bosons and fermions are determined by the method of "flowing bosonization", which can be described as a continuous, scale-dependent Hubbard-Stratonovich transformation. This allows an efficient parameterization of the momentum-dependent effective interaction between fermions, and it makes it possible to follow the renormalization flow into the regimes with broken symmetries, where bosonic fluctuations determine the types of order present on large length scales. Numerical results for the phase diagram are presented, which include the mutual influence of the competing types of order.
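The exact flow equation for the effective average action referred to here is, in its standard form (the Wetterich equation, quoted for orientation rather than taken from this particular work),

```latex
% Wetterich flow equation for the effective average action \Gamma_k;
% STr denotes a supertrace over bosonic and fermionic degrees of freedom,
% and R_k is the infrared regulator at scale k.
\partial_k \Gamma_k[\Phi]
  = \tfrac{1}{2}\,\mathrm{STr}\!\left[
      \left(\Gamma_k^{(2)}[\Phi] + R_k\right)^{-1} \partial_k R_k
    \right]
```

where the supertrace form is the one relevant for the mixed boson-fermion setting described in the abstract.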
The main task of this work has been to investigate the effects of anisotropy on the propagation of seismic waves in the upper mantle beneath Germany and adjacent areas. The 3D tomographic investigations carried out here for the crust and the upper mantle, which take the influence of anisotropy into account, close a gap in the investigations for Europe. At the beginning, a precise analysis of the residuals (RES, the difference between calculated and observed arrival time) was carried out, which confirmed the existence of anisotropy for Pn phases. The application of the elliptical correction for anisotropy in the upper mantle resulted in a better fit of the vertically layered 1D model, compared to the results of preceding seismological experiments and 1D and 2D investigations. The simultaneous inversion improved both the relocalization of the hypocenters and the reconstruction of the seismic structure in comparison with the geology and tectonics described by other investigations. The results for the seismic structure and the relocalization were confirmed by several different tests.
Since the early 1990s, microfluidic lab-on-a-chip systems have been the focus of much research. One of the most challenging objectives is the design of a Micro Total Analysis System (µTAS), the integration of several laboratory procedures on a small microfluidic chip. These tasks include the injection and preparation of the sample and its subsequent guidance by hydrodynamic or electromagnetic means to the functional sites of the device. It is at these sites that chemical reactions take place, followed by separation and detection of the products. Thus, µTAS devices require control over the component that is to be analyzed. Here magnetic markers come into play: attaching a magnetic label to the analyte allows for the manipulation and detection of the combined objects. In this regard, this work aims to improve the understanding of the possibilities and limitations of such approaches. A strong focus lies on recognition by magnetoresistive sensors for the detection and position estimation of a magnetic particle. A constructive method is developed to design sensor arrays with a defined spatial resolution.
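As an illustration of the position-estimation idea (a toy model of my own, not the book's sensor design): treat the marker as a point dipole above a line of sensors and recover its lateral position by least-squares matching of the dipole stray-field profile. The function names and all geometry here are illustrative assumptions.

```python
import numpy as np

def dipole_bz(x_sensor, x_particle, h=1.0, m=1.0):
    # Out-of-plane stray field of a vertically magnetised point dipole
    # (moment m) at height h above the sensor line; prefactors dropped.
    dx = x_sensor - x_particle
    r2 = dx**2 + h**2
    return m * (2 * h**2 - dx**2) / r2**2.5

def locate(readings, x_sensors, grid):
    # Least-squares grid search over candidate particle positions.
    errs = [np.sum((dipole_bz(x_sensors, xp) - readings) ** 2) for xp in grid]
    return grid[int(np.argmin(errs))]

x_sensors = np.linspace(-5, 5, 21)          # 21 sensors on a line
readings = dipole_bz(x_sensors, 1.3)        # noiseless synthetic signal
grid = np.linspace(-5, 5, 1001)
x_hat = locate(readings, x_sensors, grid)   # recovered lateral position
```

In a real device the fit would of course have to contend with sensor noise and the finite sensor size, which is where the "defined spatial resolution" of the array design becomes relevant.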
In the field of ab initio calculation of the properties of atoms, molecules and solids, the solution of the electronic Schrödinger equation, an operator eigenvalue equation for the Hamiltonian of the system, plays a major role. Of utmost significance is the lowest eigenvalue of this Hamiltonian, representing the ground state energy of the system. To meet the requirements of the multitude of possible applications of the electronic Schrödinger equation, the last decades have seen the development of a variety of methods designed to approximate the solution of this extremely high-dimensional minimization problem. The present work delivers a mathematical analysis of aspects of some of the methods used in quantum chemistry calculations. Three approaches used in the algorithmic treatment of the electronic Schrödinger equation are analysed in detail: a "direct minimization" scheme used in Hartree-Fock, Kohn-Sham and CI calculations; the Coupled Cluster method, which is of high practical significance in calculations where high accuracy is demanded; and the common acceleration technique DIIS.
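To give a flavour of the third method: DIIS (Pulay mixing) accelerates a fixed-point iteration x ↦ g(x) by replacing the newest iterate with the linear combination of stored iterates whose combined residual has minimal norm. The sketch below is my own illustration on a toy linear problem, not the book's analysis:

```python
import numpy as np

def diis(g, x0, m=5, iters=50, tol=1e-10):
    # DIIS: keep the last m values g(x_i) and residuals e_i = g(x_i) - x_i;
    # choose coefficients c with sum(c) = 1 minimising ||sum_i c_i e_i||
    # and take x = sum_i c_i g(x_i) as the next iterate.
    xs, es = [], []
    x = x0
    for _ in range(iters):
        fx = g(x)
        e = fx - x
        if np.linalg.norm(e) < tol:
            break
        xs.append(fx); es.append(e)
        xs, es = xs[-m:], es[-m:]
        n = len(es)
        # bordered KKT system of the equality-constrained least-squares problem
        A = np.zeros((n + 1, n + 1))
        A[:n, :n] = [[ei @ ej for ej in es] for ei in es]
        A[:n, n] = A[n, :n] = -1.0
        rhs = np.zeros(n + 1); rhs[n] = -1.0
        c = np.linalg.lstsq(A, rhs, rcond=None)[0][:n]
        x = sum(ci * xi for ci, xi in zip(c, xs))
    return x

# toy fixed-point problem x = Ax + b with exact solution (I - A)^{-1} b
A = np.array([[0.5, 0.2], [0.1, 0.4]])
b = np.array([1.0, 1.0])
x = diis(lambda v: A @ v + b, np.zeros(2))
```

On a linear problem like this, DIIS behaves like a Krylov method and converges in a handful of iterations, far faster than the plain fixed-point iteration; in electronic-structure codes g would be the self-consistent-field update.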
With the growing influence of information technology on all areas of life, the way people and companies conduct business was (and still is) being reshaped, and the adoption of new technologies and paradigms (such as Service Oriented Architecture, SOA) is of great interest. This book addresses one of the hot topics supporting the alignment of IT and business, namely Service Level Agreements (SLAs). The underlying assumption of this thesis is that, with some enhancements and a merge of existing technologies, SLAs can be made product-ready for certain industrial domains. Starting from a set of use cases from different domains (eHealth, car crash simulations and visualization), an SLA schema and an associated management framework architecture are derived and evaluated with respect to the use cases. In addition, an outlook is given on future research topics to allow for further enhancements in this area. The solutions presented in the book were partially derived from the author's work within national and international research activities such as SLA4D-Grid, BREIN, NextGRID and BEinGRID.
Supervised learning is a branch of artificial intelligence concerned with developing computer programs that automatically improve with experience through knowledge extraction from examples; it builds predictive models from labeled data. Such learning approaches are particularly useful for tasks involving the automatic categorization, retrieval and extraction of knowledge from large collections of data such as text, images and videos. However, labeling the training data is often difficult, expensive or time-consuming, as it requires the effort of human annotators, sometimes with specific domain experience. Semi-supervised learning (SSL) aims to minimize the cost of manual annotation by allowing the model to exploit part or all of the available unlabeled data. Semi-supervised learning and ensemble learning are two paradigms that were developed almost in parallel: semi-supervised learning tries to improve generalization performance by exploiting unlabeled data, while ensemble learning tries to achieve the same objective by constructing multiple predictors. This book concentrates on SSL with ensembles (committees).
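The combination of the two paradigms can be sketched in a few lines: a committee of learners labels the unlabeled points it unanimously agrees on, and those points are promoted into the training set. This is a minimal illustration of the idea, not an algorithm from the book; the learners, data and agreement rule are my own choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def nn_predict(Xt, yt, X):
    # 1-nearest-neighbour prediction (squared Euclidean distance)
    d = ((X[:, None, :] - Xt[None, :, :]) ** 2).sum(-1)
    return yt[d.argmin(axis=1)]

def committee_self_training(X_l, y_l, X_u, n_members=3, rounds=5):
    # Each committee member is a 1-NN learner fit on a bootstrap sample of
    # the labelled data (resampled until every class is represented).
    # Unlabelled points the committee labels unanimously are moved into the
    # labelled set; repeat for a few rounds.
    n_classes = len(np.unique(y_l))
    while len(X_u) and rounds:
        rounds -= 1
        votes = []
        for _ in range(n_members):
            idx = rng.integers(0, len(X_l), len(X_l))
            while len(np.unique(y_l[idx])) < n_classes:
                idx = rng.integers(0, len(X_l), len(X_l))
            votes.append(nn_predict(X_l[idx], y_l[idx], X_u))
        votes = np.stack(votes)
        sure = (votes == votes[0]).all(axis=0)   # unanimous committee vote
        if not sure.any():
            break
        X_l = np.vstack([X_l, X_u[sure]])
        y_l = np.concatenate([y_l, votes[0][sure]])
        X_u = X_u[~sure]
    return X_l, y_l

# two well-separated clusters: six labelled points, four unlabelled
X_l = np.array([[0., 0.], [0., 1.], [1., 0.], [5., 5.], [5., 6.], [6., 5.]])
y_l = np.array([0, 0, 0, 1, 1, 1])
X_u = np.array([[0.5, 0.5], [5.5, 5.5], [0.2, 0.8], [5.8, 5.2]])
XL, yL = committee_self_training(X_l, y_l, X_u)
```

The unanimity requirement is the committee's stand-in for a confidence threshold: disagreement among members flags points whose pseudo-labels would be risky to trust.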
Since non-classical crystallization pathways were revealed, our picture of nucleation and crystal growth has in recent years become quite confusing and ramified: we are confronted with the existence of liquid phases of amorphous calcium carbonate, polymer-induced liquid precursors and, last but not least, pre-nucleation clusters. This contribution explores: (a) the symmetry-breaking phase selection of calcium carbonate, which is based upon a subtle interplay of interlinked equilibria and is ultimately ascribed to the weak parity-violation energy difference; (b) a morphogenetic employment of mesocrystallinity: the inter-crystalline minority constituent of a mesocrystal, e.g. occluded protein or polymeric additives, experiences compression molding, which can be employed for the preparation of nanotubes of various materials, e.g. calcium carbonate or cadmium sulfide; (c) the existence of a liquid intermediate phase during metal carbonate formation. Using a diffusion-controlled and contact-free experimental setup, unequivocal evidence is provided for the existence of non-classical liquid intermediates, which precede the crystalline phase of bivalent carbonates under near-neutral conditions.
In a study of the population dynamics of the surf clams Donax hanleyanus and Mesodesma mactroides from exposed sandy beaches off Argentina, differences were found on spatial and temporal scales in the population structure, growth and reproductive biology of these two dominant species. Histological analyses from three Argentinean sandy beaches with contrasting morphodynamics not only revealed the size at first maturity and a reproductive cycle distinct from historical data, but also contradicted the habitat harshness hypothesis commonly applied to sandy beach communities. Furthermore, by carrying out fluorescent tagging-recapture experiments, a new time-saving method to estimate growth was developed.
The experimental finding that the quark masses and mixing angles in the Standard Model differ so widely is considered unnatural, because they originate from a single mechanism, the Higgs mechanism. This is known as the quark flavour puzzle, and in this thesis we discuss two approaches to resolving it. Incorporating the concept of minimal flavour violation, we promote the Yukawa entries to dynamical spurion fields in an effective field theory. Their vacuum expectation values sequentially break the maximal flavour symmetry of the Standard Model and generate the hierarchy of the quark masses and mixing angles. In Randall-Sundrum models the five-dimensional quark mass matrices render the effective four-dimensional Yukawa matrices hierarchical through a different localisation of the quarks in the extra dimension. In both models we focus on possible flavour-changing neutral currents, which are constrained by experiment.
Nuclear magnetic resonance (NMR) is a versatile technique relying on spin-bearing nuclei. Since its discovery more than 60 years ago, NMR and related techniques have become indispensable tools with innumerable applications in physics, chemistry, biology and medicine. One of the main obstacles in NMR is its notorious lack of sensitivity, which is due to the minuscule energy splitting of the nuclear spins at room temperature. Conversely, this inherently low polarization allows for a theoretical sensitivity enhancement of more than 10,000. The NMR signal enhancement of protons which can be achieved by means of Dynamic Nuclear Polarization (DNP) is approximately 660. In this book, different hardware aspects and polarizing agents for DNP were studied. The results show the potential of DNP, especially at a magnetic field of B = 0.35 T, for the polarization of nuclei with a very low magnetogyric ratio, which should enable many new applications. The components and designs presented here could form the foundation of a mobile DNP polarizer for medical applications.
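The proton enhancement factor of roughly 660 quoted above is simply the ratio of the electron and proton gyromagnetic ratios, which sets the theoretical ceiling for proton DNP. A quick back-of-the-envelope check with standard literature values (quoted here for illustration, not taken from the book):

```python
# gyromagnetic ratios (gamma / 2*pi), standard literature values in MHz/T
gamma_e = 28024.95   # free electron
gamma_h = 42.577     # proton (1H)

enhancement = gamma_e / gamma_h   # maximum DNP enhancement for 1H
print(round(enhancement))         # prints 658
```

For nuclei with much smaller magnetogyric ratios the same ratio is correspondingly larger, which is why low-gamma nuclei stand to gain the most from DNP.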
The work is devoted to the modeling of acoustic waves in multi-layered structures surrounded by a fluid and consisting of different kinds of materials including piezoelectric materials and composite multilayers. It consists of three parts. The first part describes the modeling of an acoustic sensor by the finite element method. The existence and uniqueness of a time-harmonic solution are rigorously established under physically appropriate assumptions. The convergence of Ritz-Galerkin solutions to the exact solution is proved. The second part of the work describes a semi-analytical method for the fast calculation of dispersion relations for plane acoustic waves in multi-layered structures. The software implementing this approach is presented. The third part investigates a number of issues of the homogenization theory for linear systems of elasticity. The limiting equations are rigorously derived by the two-scale method and an error estimate is established. For the case of laminated structures, an explicit formula for the elasticity tensor of the homogenized material is derived.
In this work, the influence of additions of Zn, Zr and Ce mischmetal on the casting, indirect extrusion processing, microstructural development and resulting mechanical properties of Mg and Mg alloys was investigated. It was found that the grain size of the cast alloys is controlled by a grain growth factor (Q) mechanism. Q predicts the grain size of the cast billet, a parameter which strongly influences the deformation response of the alloy during extrusion. The Zener-Hollomon parameter (Z) was determined from process variables; Z correlated alloy-dependent deformation variables during extrusion with the resulting average recrystallised grain size. Texture measurements revealed a distinctive recrystallisation development. The obtained results show correlations between initial alloy conditions and process variables, indicating that the resulting microstructure and mechanical properties can be estimated by an appropriate choice of alloy composition and process parameters. Most of these correlations are based on proven phenomenological assumptions, which makes them reliable references for further research and development of Mg wrought alloys.
Local studies of vortex distribution in superconducting (SC) thin films and their pinning by natural and artificial defects were performed using low-temperature magnetic force microscopy (MFM). The depinning of vortices by the MFM tip was visualized, and the local pinning force was estimated, in good agreement with global transport measurements. It was shown that the presence of an ordered array of ferromagnetic dots in a magnetic vortex state underneath the SC film significantly influences the natural pinning landscape of the superconductor, leading to commensurate pinning effects. This strong pinning exceeds the repulsive interaction between the SC vortices and allows vortex clusters to be located at each dot. For industrially applicable YBCO films, the main questions discussed were the correlation between vortices and artificial defects as well as vortex imaging on rough thin films. Since surface roughness poses a severe problem for the scanning tip, a nanoscale wedge polishing technique was developed: mounting the sample at a defined small angle results in a smooth surface and a monotonic thickness reduction of the film along the length of the sample.
Cytochrome P450 enzymes (P450s) are heme b containing monooxygenases that introduce one atom of molecular oxygen into a vast range of compounds and catalyze a broad spectrum of reactions where chemical catalysts often fail. Although the biotechnological potential of these biocatalysts was recognized many years ago, their industrial applications are still rare due to several limitations: the unknown physiological function of most P450s makes the selection of candidate enzymes for a biotechnological process time- and labor-intensive. Furthermore, suitable electron transfer proteins are mandatory for efficient P450 biocatalysis. Within this study, novel P450s whose oxidizing activities lead to high-value fine chemicals were identified and characterized. A whole-cell process with recombinant Escherichia coli was developed to produce the sought-after fragrance (+)-nootkatone. The application of physiological redox proteins thereby greatly reduced uncoupling between NAD(P)H consumption and substrate oxidation by the P450s, which led to improved biocatalytic activities. This book addresses researchers and companies working in the field of biocatalysis and biotechnological research.
One key scientific program of the MAGIC telescope project is the discovery and detection of blazars, which constitute the most prominent extragalactic source class in the very high energy (VHE) gamma-ray regime, with 29 out of 34 known objects (as of April 2010). A major part of the available observation time in recent years was therefore spent on high-frequency peaked blazars. Between August 2005 and April 2009, a sample of 24 X-ray selected high-frequency peaked blazars was observed with the MAGIC telescope; a subset of 20 blazars not previously detected is treated more closely in this work. In this campaign, spanning almost four years, approximately 450 hours of the available observation time were dedicated to investigating the baseline emission of blazars and their broad-band spectral properties in this emission state. Apart from calculating integral flux upper limits for these objects in the VHE regime, a stacking method was applied to the sample. An excess of gamma-rays was found with a significance of 4.5 standard deviations in 349.5 hours of effective exposure time. For the first time, signal stacking in the VHE regime proved successful.
This thesis is concerned with the control of quantum systems. Given a Hamiltonian model of a quantum system, we are interested in finding controls, typically shaped electromagnetic pulses, that steer the evolution toward a desired target operation. For this we employ a numerical optimisation method known as the GRAPE algorithm. For particular experimental systems, we design control schemes that respect constraints of robustness and addressability, and are within the reach of the experimental hardware. Applications include the preparation of cluster states in a system of trapped ions, the implementation of the two-qubit Deutsch and Grover algorithms on a pair of carbon nuclei at a nitrogen-vacancy center in diamond, and the implementation of quantum gates on a grid of coupled superconducting qubits. In some special cases analytical solutions are obtained. The methods applied here are fairly general and can be adapted to a variety of other physical systems and tasks.
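The core of GRAPE can be conveyed in a few lines: discretise the pulse into piecewise-constant segments and perform gradient ascent on the gate fidelity. The sketch below is my own illustration, not the thesis's code; it drives a single qubit along sigma_x and sigma_y, and a finite-difference gradient stands in for GRAPE's analytic gradient for brevity.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def expm_herm(H):
    # exp(-iH) for Hermitian H via eigendecomposition
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w)) @ V.conj().T

def propagate(u, dt):
    # piecewise-constant controls: U = U_N ... U_2 U_1
    U = np.eye(2, dtype=complex)
    for ux, uy in u:
        U = expm_herm((ux * sx + uy * sy) * dt) @ U
    return U

def fidelity(u, dt, U_target):
    # phase-insensitive gate fidelity |tr(U_target^dag U)|^2 / d^2
    return abs(np.trace(U_target.conj().T @ propagate(u, dt))) ** 2 / 4

def grape(U_target, n_seg=10, dt=0.1, iters=400, lr=0.5, eps=1e-6):
    u = 0.1 * np.ones((n_seg, 2))        # initial pulse guess
    for _ in range(iters):
        f0 = fidelity(u, dt, U_target)
        grad = np.zeros_like(u)
        for k in range(n_seg):           # finite-difference gradient
            for j in range(2):
                up = u.copy()
                up[k, j] += eps
                grad[k, j] = (fidelity(up, dt, U_target) - f0) / eps
        u += lr * grad                   # gradient ascent on fidelity
    return u, fidelity(u, dt, U_target)

pulse, fid = grape(sx)   # target: an X gate
```

Real applications add exactly the ingredients the abstract lists on top of this skeleton: robustness terms, addressability constraints, and hardware limits on pulse amplitude and bandwidth.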
In this thesis, two aspects of control theory, namely controllability and optimal control, are applied to quantum systems. The presented results are based on group-theoretical techniques and numerical studies. By Lie-algebraic analysis, the controllability properties of systems with an arbitrary topology are described and related to the symmetries existing in these systems. We find that symmetry precludes full controllability. Our work investigates well-known control systems and gives rules for the design of new systems. Furthermore, theoretical and numerical concepts are instrumental in studying quantum channels: their capacities are optimised using gradient flows on the unitary group in order to find counterexamples to a long-established additivity conjecture. The last part of this thesis presents and benchmarks a modular optimal control algorithm known as GRAPE. Numerical tests show how the interplay of its modules can be optimised for higher performance, and how the algorithm performs in comparison to a Krotov-type optimal control algorithm. It is found that GRAPE performs particularly well when aiming for high fidelities.
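The Lie-algebraic controllability test can be made concrete: compute the dimension of the dynamical Lie algebra generated by the drift and control Hamiltonians and compare it with dim su(2^n) = 4^n - 1. The following minimal numerical version is my own illustration, not the thesis's implementation; it also shows the symmetry statement above, since a swap-symmetric collective control fails the test.

```python
import numpy as np

def lie_closure_dim(generators, tol=1e-9):
    # Dimension of the Lie algebra generated by {iH : H in generators}:
    # keep an orthonormal basis of vectorised elements and enqueue
    # commutators until the span stops growing.
    basis, elems = [], []
    def try_add(M):
        v = M.reshape(-1)
        for b in basis:
            v = v - (b.conj() @ v) * b
        nrm = np.linalg.norm(v)
        if nrm <= tol:
            return False
        basis.append(v / nrm)
        elems.append(M)
        return True
    queue = [1j * H for H in generators]
    while queue:
        M = queue.pop()
        if try_add(M):
            queue.extend(M @ B - B @ M for B in list(elems))
    return len(basis)

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

ZZ = np.kron(Z, Z)   # Ising-type coupling as the drift
# individual x and y controls on each qubit: fully controllable, dim su(4) = 15
dim_full = lie_closure_dim([ZZ, np.kron(X, I2), np.kron(I2, X),
                            np.kron(Y, I2), np.kron(I2, Y)])
# collective control X1 + X2: permutation symmetry confines the dynamics
dim_sym = lie_closure_dim([ZZ, np.kron(X, I2) + np.kron(I2, X)])
```

The symmetric case stays inside the swap-invariant subalgebra, so its closure dimension falls short of 15, exactly the "symmetry precludes full controllability" statement.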
Charles S. Peirce's Self-Corrective Thesis (SCT) is based on the idea that the progress of science lies in its self-corrective methods. In particular, Peirce's notion that scientific method consists of four self-corrective inferences (abduction, deduction, qualitative and quantitative induction) is controversial in the philosophy and history of science. Supporters hold that all the aspects of scientific inference introduced by Peirce contribute to its self-correction, while critics claim that the justification for the self-corrective character of scientific method is inadequate. Some critics argue that the justification for the self-corrective character of abduction is insufficient, while others maintain that of all four methods only quantitative induction is proved to be self-corrective. In this project the author explores Peirce's proposed scientific methodology and discusses it in light of these objections, so as to defend the SCT and delineate the context of its validity. He appeals to the historical case of the Chemical Revolution and discusses its interpretations by different methodological views in order to evaluate the SCT.
Verification problems are often expressed in a language which mixes several theories. A natural question to ask is whether one can use decision procedures for individual theories to construct a decision procedure for the union theory. The setup considered in this book is that of one base theory which is extended by one or more theories. The question is whether and when a given problem in the extended setting can be effectively reduced to an equivalent problem over the base theory. A case where this is always possible is that of so-called local theory extensions. The theory of local extensions is developed and some applications are given. It is shown that suitable fragments of both the theory of arrays and the theory of pointers are local extensions as well. Finally, the case of more than one theory extension is discussed. The reductive approach outlined above has become particularly relevant in recent years due to the rise of powerful solvers for background theories common in verification tasks. These so-called SMT solvers effectively handle theories such as real linear or integer arithmetic.
The stability of colloidal dispersions such as foams is governed by the interactions in the thin liquid films between the compartments. For a better understanding of the macroscopic foam, single foam films are investigated. The focus of this study is the effect of oppositely charged polyelectrolyte/surfactant mixtures on foam film stability. For this purpose, mainly mixtures of cationic surfactants and anionic polyelectrolytes around the isoelectric point (IEP) are used. Since the two components are oppositely charged, they can form highly surface-active complexes. The results of the thin film pressure balance (TFPB) measurements show that the general properties of foam films formed from these mixtures are very similar throughout all systems. A reduction of foam film stability is detected slightly below the nominal IEP of the system, and very stable foam films are found in the concentration regime above the IEP. However, the surface characterisation of the air/water interface reveals that this phenomenon is not due to a charge reversal at the interface. Furthermore, the results show that the properties of the foam films depend on the polymer chain length and the hydrophilic/hydrophobic balance of the components.
This thesis presents back-end processing steps for polymer labs-on-a-chip which permit the realization of complex applications like biological assays on-chip. Pre-treatment for surface cleaning, hydrophilization to promote capillary action, selective hydrophobization for flow control, dry reagent pre-storage to enable fully integrated chips, as well as biocompatible sealing to ensure operation and biological activity after processing are covered. Most of these processing steps are developed on a sample lab-on-a-chip which aims at the on-chip detection of mRNA by nucleic acid sequence-based amplification (NASBA), e.g. for cervical cancer diagnostics.