
Books published by MOHAMMED ABDUL SATTAR

  • af Harinder Singh
    353,95 kr.

    Developments in the medical field have led to the production of massive amounts of medical data. In 2002, the Department of Radiology of a hospital in Geneva alone produced more than 12,000 images a day. These medical datasets are available for further exploration and research, with far-reaching impact on the progress and execution of health programs. The information extracted from medical datasets paves the way for health administration, e-health diagnosis and therapy, so there is an urgent need to intensify research on medical data. Medical data is a huge and growing resource whose size typically runs into terabytes. Such big data raises many challenges and issues owing to its volume, variety, velocity, value and variability. Moreover, traditional file management systems slow down because they cannot manage unstructured, variable and complex big data; managing it is a cumbersome, time-consuming task that requires new computing techniques. The exponential growth of medical data has thus necessitated a paradigm shift in the way data is managed and processed. Recent technological advancements have changed the way big data is stored and processed, which motivated us to look for new solutions for managing volumetric medical datasets and extracting valuable information efficiently. Hadoop is a top-level Apache project written in Java. It was developed by Doug Cutting as a collection of open-source projects and is now used on massive amounts of unstructured data. With Hadoop, data that was previously difficult to analyze can be harnessed, and extremely large datasets with changing structure can be processed. Hadoop is composed of different modules such as HBase, Pig, HCatalog, Hive, ZooKeeper, Oozie and Kafka, but the most common big-data paradigms are the Hadoop Distributed File System (HDFS) and MapReduce.
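    To make the MapReduce paradigm named above concrete, here is a minimal Python sketch (our illustration, not the book's code) that simulates the map and reduce phases of the canonical word-count job; on a real cluster the same logic would run as a Hadoop streaming or Java job over HDFS.

        from collections import defaultdict

        def map_phase(documents):
            # Mapper: emit a (word, 1) pair for every word in every record.
            for doc in documents:
                for word in doc.lower().split():
                    yield word, 1

        def reduce_phase(pairs):
            # Reducer: sum the counts emitted for each distinct word.
            counts = defaultdict(int)
            for word, count in pairs:
                counts[word] += count
            return dict(counts)

        records = ["MRI scan of thorax", "CT scan of thorax"]
        print(reduce_phase(map_phase(records)))  # {'mri': 1, 'scan': 2, ...}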

  • af Manoj Kumar M.
    363,95 kr.

    Voronoi diagrams, named after the Russian mathematician Georgy Feodosievych Voronoy, are geometric structures that have been extensively studied in computational geometry. A Voronoi diagram can be constructed for a set of geometric objects called sites. The geometric objects can be points, lines, circles, curves, surfaces, spheres, etc. A Voronoi diagram divides the space into regions called Voronoi cells such that there is a Voronoi cell corresponding to each object in the input set. A Voronoi cell is the set of points closer to the corresponding object than to any other object in the input set. Applications of the Voronoi diagram can be seen in various fields of science and engineering. Voronoi diagrams are used in motion planning and collision avoidance for robots. Candeloro used the Voronoi diagram of a point set in 3D space to develop a path-planning system for Unmanned Underwater Vehicles (UUVs). In town planning, the Voronoi diagram can be used for the efficient allocation of resources. Image segmentation and feature extraction are other areas where properties of the Voronoi diagram are utilized, and similar applications can be seen in disease research and diagnosis. The Voronoi diagram of a set of circles is used for sensor deployment, where the radius of a circle represents the range of a sensor. It is also used in the cable packing of electric wires to find the enclosing circle with a minimum radius.
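    As a small hands-on illustration of the definition above (our sketch; SciPy is an assumed tool, not one named by the book), the following Python code builds the Voronoi diagram of five point sites and lists the cell owned by each site:

        import numpy as np
        from scipy.spatial import Voronoi

        # Sites: a small set of 2-D points (e.g., sensor positions).
        sites = np.array([[0, 0], [2, 0], [1, 2], [3, 3], [0, 3]])
        vor = Voronoi(sites)

        # vor.point_region maps each site to its Voronoi cell; a cell is a
        # list of vertex indices, with -1 marking a vertex at infinity
        # (unbounded cells on the convex hull).
        for i, region_idx in enumerate(vor.point_region):
            print(f"site {sites[i]} -> cell vertices {vor.regions[region_idx]}")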

  • af Hebrew Benhur Crispin
    353,95 kr.

    The early twentieth century witnessed fundamental changes in our understanding of the nature of light and matter. This was primarily due to the inadequacy of classical physics in explaining certain physical phenomena, such as blackbody radiation and the photoelectric effect. A major conceptual breakthrough came first in the year 1900 with Planck's hypothesis of energy quanta to account for the blackbody spectrum. The idea of the quantization of energy was then applied to radiation by Einstein in his investigation of the photoelectric effect. He considered light to be discrete, localized wave packets called "light quanta" or "photons". Several experiments that followed gave evidence of the wave-particle duality of the radiation field and matter, which culminated in the formulation of quantum mechanics. Since its inception, the field has seen tremendous progress, with applications ranging from lasers, transistors, light-emitting diodes and magnetic resonance imaging to quantum computing. The invention of the laser was certainly an extraordinary achievement: it revolutionized industry and enabled access to new physical regimes of atom-light interaction. Around the same period, the optical experiments of Hanbury Brown and Twiss revealed unexpected photon correlations, demonstrating the need for quantum mechanical methods to describe the statistics of light. The availability of tunable lasers later opened up the exciting field of quantum optics. Since then, many intriguing and fascinating phenomena have been predicted and observed experimentally, such as squeezed states of light, photon antibunching, entanglement, trapping of atoms, optical frequency combs and quantum computation.

  • af Ujjwal Ranjan Dahiya
    378,95 kr.

    Nanosized materials possess unique properties because of their sub-micrometre size (less than 1 µm) and thus find a wide array of applications in optics, electronics and biology. In biology they are used for the delivery of therapeutic molecules and drugs (nanocarriers), imaging, theranostics and ablation therapy. A wide range of biotechnological applications are served by recent advances in nano-formulation approaches and the design of novel nanocarriers capable of efficiently delivering biotherapeutic molecules. These nanosized structures are much more efficient than traditional therapies in delivering drugs and other molecules to specific locations in a controlled manner, even at much smaller dosages. Conventional pharmaceutical delivery systems are marred by multiple issues such as poor specificity, induction of drug resistance and toxicity, which severely decrease therapeutic efficiency. This is where nano-based delivery approaches come to the rescue, mitigating rapid clearance, off-target effects, uncontrolled release and toxicity issues. Nanocarrier-based delivery approaches are mostly dedicated to transporting chemotherapeutic cargos; the carriers are composed of colloidal nano-entities and possess a high surface-area-to-volume ratio. Apart from delivering therapeutic cargoes, nano-formulations are also used for theranostic and ablation therapies. Nano-based delivery systems can be broadly categorised as organic nanocomplexes and inorganic nanoparticle-based formulations. These particles allow a great degree of manoeuvrability in terms of their composition (organic, inorganic or hybrid), shape (sphere, rod, multilamellar, hyperbranched, etc.), size (small and large) and surface functionalization (PEGylation, targeting moieties, functional groups, etc.) as per the specific requirement. Carbon-based nanomaterials, which exhibit high cargo loading and biocompatibility, are known as organic nanocomplexes and can be used for delivery purposes. The preparation of organic nanocomplexes can rely on the self-assembly properties of the molecules (amphiphilic systems) or can require a specific synthesis procedure. Recently, much attention has been drawn to micelle-like nanocomplexes prepared from amphiphilic polymers for drug delivery applications. Polymersomes have also been reported that respond to the intracellular microenvironment and tumour conditions, allowing triggered payload release (pH, temperature gradient, redox, etc.) and improved imaging sensitivity. Poly(lactic acid) (PLA), saturated poly(glycolic acid) (PGA), poly(α-hydroxy esters) and poly(lactic-co-glycolic acid) (PLGA) copolymers are FDA-approved and the most commonly used synthetic polymers for drug delivery applications.

  • af Mahak
    338,95 kr.

    The development of the World Wide Web has produced a huge and increasing amount of data and information. This huge volume of web data has become an important information resource for all internet users, and its low cost makes it all the more attractive to researchers. Users can retrieve web data by browsing and keyword searching, but these techniques have many limitations. A user browsing for a single piece of information is typically presented with many links, so it is hard to retrieve data efficiently. Most web pages contain both text and hyperlinks to other documents, and mailing lists, newsgroups and forums are further forms of data sources. Web mining can support the design and implementation of web information systems, but extracting web information remains a challenge. Web mining should be able to assess web information sources according to user needs, including the availability, importance and relevance of web systems; it should be able to select extracted data, because both related and unrelated information are present on websites; and it should make it easy to collect data, analyze it, build models and establish validity. The number of internet users has grown significantly over the last decade and continues to grow, and with it the data available on the web continues to increase exponentially. E-commerce is one of the application areas related to information mining, and the rapid growth of internet users has boosted e-business applications. Many attempts have been made at "breaking the syntax barrier" on the web, and many of them depend on text corpora whose semantic information is exploited purely by statistical techniques. The ontology framework plays a key role in the semantic web, alongside artificial intelligence methods, and builds on the Resource Description Framework (RDF) and XML. Ontology has developed into an essential modelling tool for building intelligent systems that represent domain knowledge in a form easily understandable by both humans and machines. Ontologies play an important role in making interoperability among heterogeneous systems efficient and smooth. An ontology basically provides the links between the concepts of a particular domain. The aim of an ontology is to capture knowledge about a system so that it can be shared between people and application frameworks: it specifies the semantics of a domain exactly, in a generic way, offering a basis for agreement within the area. In general, an ontology covers four key components: instances, concepts, axioms and relations. A concept is the key element: a collection of objects or an abstract set within a domain, normally representing common knowledge shared among the members of a group.
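    As a minimal sketch of the four ontology components just listed (our illustration, using the rdflib library and a hypothetical namespace, neither of which is named by the book), a concept, a relation, an instance and a simple assertion can be expressed as RDF triples:

        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import RDF, RDFS

        EX = Namespace("http://example.org/shop#")  # hypothetical namespace
        g = Graph()

        g.add((EX.Book, RDF.type, RDFS.Class))                 # concept
        g.add((EX.hasAuthor, RDF.type, RDF.Property))          # relation
        g.add((EX.item42, RDF.type, EX.Book))                  # instance
        g.add((EX.item42, EX.hasAuthor, Literal("H. Singh")))  # assertion

        print(g.serialize(format="turtle"))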

  • af Chandreswara Raju Kataru
    353,95 kr.

    Down syndrome (DS), also known as trisomy 21, is the most common human congenital anomaly, presenting with several physical and mental traits and caused by an extra copy of chromosome 21. Trisomy 21 results from abnormal chromosome segregation during meiosis, with around 90% of cases of maternal origin, and has an incidence of 1 in 800 live births. Trisomic fetuses show an increased tendency to miscarry, and DS patients develop various medical conditions. In developed countries, the life expectancy of DS patients has increased to up to 55 years with advances in medical treatment. An extra maternal chromosome accounts for roughly 90% of trisomic cases, mainly due to meiotic errors in the egg. In humans, chromosomal abnormality is the most frequent cause of fetal death. Within the first 15 weeks of gestation, half of all spontaneous abortions are chromosomally aneuploid, and approximately 50% of these aneuploidies are trisomies.

  • af Preeti Verma
    338,95 kr.

    Solar radiation is an important parameter in engineering applications including solar power plant modelling, photovoltaic cell modelling and solar heating system modelling; its proper estimation is therefore necessary. In recent years, solar radiation prediction models have been established based on parameters such as ambient temperature, sunshine duration, humidity and cloud coverage, estimated from traditional meteorological stations and related indirectly to solar radiation. These models fall into two categories: parametric methods, such as the Angstrom model, and nonparametric, artificial intelligence-based methods. The literature shows that solar radiation at a specific location can be calculated using these models. One of the easiest ways of measuring surface solar radiation is to use sensor data from ground sites; over existing ground points this also provides incoming solar radiation at high temporal resolution. This strategy, however, has a number of technological and financial drawbacks, including high costs and the need for fully skilled labor, as well as daily solar sensor maintenance, washing and calibration. Ground sensor networks, moreover, are rarely available with sufficient spatial coverage to resolve spatial patterns. Satellite-derived solar radiation is a trustworthy means of measuring solar irradiance at ground level over a wide region; in addition, the hourly values obtained have proved at least as precise as interpolation at a distance of 25 km from ground stations. Multispectral sensors on satellites are usually used to characterize atmospheric conditions such as light scattering (including Rayleigh scattering), reflection and absorption by water vapor, ozone, aerosols and clouds, since the radiative signal measured at the sensor depends both on the distribution of atmospheric components and on the sensitivity of the sensor. The wide variety of satellite observation techniques thus makes them well suited to measuring the spatial variation of solar radiation. Satellite imaging may be utilized in two ways: to build detailed radiative-transfer models using atmospheric characteristics derived from multispectral images, or to use look-up-table models associated with a physical parameterization of the radiation process.
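    To make the parametric approach concrete, the classical Angstrom-Prescott relation estimates daily global radiation from sunshine duration. The sketch below is our illustration, and the coefficients a and b are common textbook defaults used here as assumptions, since in practice they are fitted per location.

        def angstrom_prescott(h0, n, n_max, a=0.25, b=0.50):
            # H = H0 * (a + b * n/N): h0 is extraterrestrial radiation,
            # n the measured bright-sunshine hours, n_max the maximum
            # possible daylight hours, a and b site-specific coefficients.
            return h0 * (a + b * n / n_max)

        # Example: h0 = 35 MJ/m^2/day with 8 of 12 possible sunshine hours.
        print(angstrom_prescott(35.0, 8.0, 12.0))  # ~20.4 MJ/m^2/day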

  • af Prachi Telang
    378,95 kr.

    Phase transitions and new phases of matter have continued to challenge our understanding of quantum condensed matter systems. For decades, phase transitions were thought to be facilitated by the breaking of some symmetry. For example, magnetic ordering emerges due to the breaking of time-reversal symmetry; the crystallization of water into ice, on the other hand, involves broken translational symmetry. This paradigm, however, was challenged with the discovery of topological order in some condensed matter systems in the year 1982. Since then, many electronic phases with novel topologies have been predicted theoretically, and a few have also been realized experimentally. Examples of these electronic phases, across the research frontier, span from graphene to the topological insulators and beyond. Still, several theoretical predictions of novel topological phases are yet to be realized experimentally. For this reason, significant efforts are concentrated on materials with strong spin-orbit interaction, which provide a fertile ground for the experimental realization of novel topological phases. Much of condensed matter physics is about how different kinds of order emerge from interactions between many simple constituents. An ideal example is the family of transition metal oxides (TMOs), where a plethora of novel phenomena such as high-TC superconductivity, colossal magnetoresistance and metal-insulator transitions can be observed as a result of the complex interplay of electronic interactions such as electron correlations, crystal field splitting and spin-orbit coupling. The magnitude of these interactions depends on a number of factors, such as the atomic number of the transition metal, the underlying crystal geometry and the ligand environment around the transition metal. Understanding the interplay of these interactions, which ultimately decides the ground state, is one of the fundamental problems in condensed matter physics. The failure of band theory to explain the insulating ground state in TMOs highlighted the importance of electron-electron interactions. In TMOs, conduction is driven by d-orbitals, which are spatially compact compared to the s- or p-orbitals in simple metals. This results in strong Coulomb repulsion between the d electrons of TMOs. The transition from the metallic to the insulating state due to strong electron correlations was successfully modelled via the Hubbard model, given by John Hubbard.
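    For reference, the single-band Hubbard Hamiltonian mentioned above is conventionally written (standard form, not quoted from the book) as

        H = -t \sum_{\langle i,j \rangle, \sigma} \left( c^{\dagger}_{i\sigma} c_{j\sigma} + \mathrm{h.c.} \right) + U \sum_{i} n_{i\uparrow} n_{i\downarrow}

    where t is the nearest-neighbour hopping amplitude, U the on-site Coulomb repulsion and n_{i\sigma} the number operator; a correlation-driven (Mott) insulating state emerges when U/t becomes large.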

  • af Anil Kumar
    338,95 kr.

    Energy sources can be classified as primary and secondary. Primary energy sources are normally categorized as renewable or non-renewable on the basis of their depletion characteristics. Renewable energy is derived from natural resources that are automatically replenished; such sources are also known as clean energy sources. Secondary energy sources, e.g. heat and electricity, are derived from the transformation of primary energy sources. Renewable energy sources are inexhaustible, but under Indian climatic conditions solar energy is the most suitable energy source. Solar photovoltaic (PV) and solar thermal technologies are both efficient and gaining popularity all over the globe; choosing between them depends on utility and suitability. For energy storage and 24x7 energy supply, solar thermal technology is gaining more popularity than solar PV. Solar thermal technology is also known as Concentrating Solar Power (CSP). CSP systems can be economically feasible at a minimum Direct Normal Irradiance (DNI) of 1600-1800 kWh/m2/year by utilizing novel technologies and materials, economies of scale, supportive renewable policies, etc. CSP systems have a range of prime objectives: to operate in an environmentally safe manner, to reduce initial cost and land area, to increase long-term system reliability, and to ease service and maintenance. Sequestration of one ton of carbon dioxide equivalent (CO2-e) corresponds to one Certified Emission Reduction (CER) unit. CSP technologies can be categorized as line-focus and point-focus systems.

  • af Vinayak N Kulkarni
    358,95 kr.

    Bone is a functional organ associated with other elements such as vascular systems, cartilage, connective tissues and nervous components. Body movement is possible when these functional organs act together with skeletal muscles. Large bones are more prone to injury in the skeletal system than small and medium bones. Injured bones are treated through internal and external fixation of implants. Apart from long bone injury, joint replacement is another prime intervention in which the bone is expected to host a biomaterial. The regeneration of a patient's injury depends on how the bone responds to the implanted biomaterial. The demand for biomaterials is increasing year by year. The field of biomaterials has grown continuously for the last few decades because of the increase in the ageing population and the growth in the average weight of people across the globe, in addition to common accidents and sports injuries. The heart, blood vessels, shoulders, knees, spine, elbows, ears, hips and dental structures are a few parts of the body in which biomaterials are used, as artificial valves, stents, replacement implants and bone plates. Most replacements are registered for hip, knee, spine and dental-related problems. By the end of the next decade, towards 2030, total knee arthroplasty surgeries are forecast to grow by 673% and total hip replacements by 174% compared to the present replacement rate. The mechanical qualities of bone deteriorate as a result of excessive loading on bones and joints and the lack of a normal biological self-healing process, which causes degenerative illnesses. One of the best solutions to such problems is to use an artificial biomaterial as an implant of suitable size and shape, which helps restore body movement in the affected joints and compromised structures. Replacement surgeries have undoubtedly become well established, but on the other side, revision surgeries of knee, hip and spinal implants have been increasing rapidly, which is a matter of concern. These revision surgeries are expensive, have a low success rate and, most importantly, are painful for the patients. The targets of present-day researchers are therefore not just to develop appropriate bio-implants but also to select suitable machining techniques and proper procedures for producing bio-implant alloys, improving the quality of the implant material so as to avoid revision surgeries.

  • af Charu Sharma
    363,95 kr.

    In recent years, Wireless Sensor Networks (WSNs) have gained global attention as they provide the most cost-effective solutions to a variety of real-world problems. However, because their communication is wireless and lacks a physical medium, these networks are vulnerable to many cyber-attacks; as a result, false information is easily injected to bring network performance down. The communication of sensor nodes (SNs) is exposed to different types of security threats, and because of the limited storage and low power of SNs, conventional security solutions are difficult to implement. Extensive research is therefore required to achieve security in WSNs. Different issues related to security in WSNs are analyzed and discussed in this chapter, along with the various research objectives implemented in this thesis in the field of WSNs. WSNs offer immense benefits due to their low-cost solutions, flexibility to work in different environments, ease of deployment (no cables necessary), easy troubleshooting, minimal installation cost and high performance. These networks consist of autonomous, small-sized, portable smart SNs or motes. The nodes are randomly deployed in an unattended fashion and connected via wireless communication. WSNs are a rapidly developing technology that can be utilized in a variety of dangerous and unusual conditions, including forest fire detection, battlefields, agriculture, industry, health care, oceanography and wildlife movement monitoring, as well as to automate mundane tasks. WSNs are highly scalable, versatile and, because of their low maintenance, ideally suited for real-time monitoring. The sensor network is made up of SNs that sense data and relay it to a cluster head (CH), as sketched below. The role of a CH is to gather data, process it and communicate with other CHs in order to pass the information to the base station (BS). The SNs have restricted resources in terms of power, bandwidth, memory and processing capability.
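    The following toy Python sketch (our illustration, with made-up readings) shows the SN -> CH -> BS flow just described: each cluster head aggregates its members' readings so that only one value per cluster travels on to the base station.

        import random

        # Hypothetical topology: two cluster heads, three member nodes each.
        clusters = {"CH-1": [random.uniform(20, 30) for _ in range(3)],
                    "CH-2": [random.uniform(20, 30) for _ in range(3)]}

        def aggregate_and_forward(clusters):
            # Each CH averages its members' readings; only the aggregate
            # is forwarded, saving energy and bandwidth at the nodes.
            return {ch: sum(readings) / len(readings)
                    for ch, readings in clusters.items()}

        base_station_view = aggregate_and_forward(clusters)
        print(base_station_view)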

  • af Samraj Mollick
    378,95 kr.

    Porosity is an important feature of a material. In the conventional sense, porous materials must have permanent, interconnected voids that are permeable to gases or liquids. Porosity can be observed in rocks, ceramics, soils, biological tissue, charcoal and dried plant husks, and porous materials have been utilized for filtration, purification, petrochemicals, cooling, etc. [1-4] In modern times, there is a growing demand for porous solids not only in adsorption and catalysis; they are also deployed in other sectors, including energy, the environment and health. Porous solids have long had a ubiquitous influence on our society and are indispensable in daily life. However, target-specific applications are almost impossible with traditional porous solids, as their structure-property relationships remain unknown. Therefore, the search for a new class of porous materials is essential, in which fine-tuning of order and functionality at the atomic level can easily be attained. Such exquisite control over the entire structure permits the tailoring of materials for challenging and appealing applications that are still not realized with traditional porous solids. This newly developed class of porous materials has been termed advanced porous materials (APMs). The discovery of APMs at the end of the eighties brought a revolution in chemistry and materials engineering. An ever-growing number of researchers has joined this new field, since the virtually unlimited chemical and structural possibilities of this class of materials offer great scope for imaginative chemists. The unique arrangement of organic moieties, inorganic moieties or a combination of both affords a new class of porous materials, potentially complementary to and even much more attractive than traditional porous solids. APMs encompass a wide range of materials, e.g., metal-organic frameworks (MOFs), porous organic materials (POMs), metal-organic polyhedra (MOPs), metal-organic gels (MOGs), etc. Research interest in APMs has skyrocketed in various respects owing to several key features. Firstly, APMs can be designed to feature high surface area and well-defined functional pores. Secondly, some of them can be readily molded into monolithic forms or thin films, which provides substantial advantages in many practical applications. Additionally, a few of them can even be dissolved in a common solvent and processed into workable forms without compromising their inherent porosity, which is almost impossible to imagine with conventional porous solids. Furthermore, stimuli-responsive APMs can be designed that are capable of reversibly switching between open and closed porous states upon application of an external stimulus.

  • af Harshad Paithankar
    353,95 kr.

    RNAs perform a broad range of vital cellular functions. They serve as information carriers, form part of the ribosome, exhibit catalytic activities, are involved in gene regulation, etc. Most of these functions involve the interaction of RNAs with proteins. For example, the genetic information contained in DNA is transcribed to RNA with the assistance of RNA polymerase, and tRNA molecules charged by aminoacyl-tRNA synthetases deliver amino acids to the ribosome for the translation of the information encoded in the mRNA. Another example of protein-RNA interaction is the processing of RNAs by ribonucleases that form ribonucleoprotein complexes (RNPs). These complexes are involved in gene regulation, antiviral defense, immune response, etc. The formation of the correct complex is governed by the different types of interactions between the amino acids of the protein and the nucleobases, ribose sugars or phosphodiester backbone of the RNA, which include hydrogen bonding, electrostatic and stacking interactions. Any irregularities in these interactions can lead to dysfunction of the complex and to diseases such as cancer, cardiovascular dysfunction and neurodegenerative diseases, to name a few. Despite the large number of functionally significant protein-RNA interactions known, the molecular mechanisms of interaction between RNA and protein are poorly understood. An important aspect of understanding these mechanisms is the characterization of the interactions at the structural, kinetic and thermodynamic levels. RNA molecules can fold into a variety of secondary and tertiary structures formed by various types of base-pairing interactions between the nucleobases. Base-pairing in RNA can vary from the most commonly observed Watson-Crick pairs to others such as Hoogsteen, wobble, reverse Watson-Crick and reverse Hoogsteen pairs. Proteins can recognize RNAs by interacting with single-stranded RNA and/or secondary and tertiary structures such as an RNA duplex, a duplex with an internal bulge or mismatch, stem-loop structures and quadruplexes.

  • af Rashmi Pandey
    353,95 kr.

    As we become more dependent on electricity for our daily needs, the world faces challenging issues in supplying energy to energy-based appliances. The greenhouse effect and industrial expansion cause ozone layer depletion, climate change, shifts in weather patterns and the continuous release of carbon dioxide (CO2). An important issue, therefore, is the need to develop new energy harvesting techniques that fulfil these energy demands. Ambient energy is an attractive alternative: it draws energy from the environment and also reduces environmental pollution. Radio-frequency (RF) signals are extensively used to transmit information, although the telecommunications field has developed largely independently of wireless power transfer (WPT). RF signals do not only carry information; their available energy can also be captured by an RF energy harvester. Radio Frequency Energy Harvesting (RFEH) is a green and clean energy concept in which RF energy is captured from the environment and converted into electrical energy. It has become a developing technology of increasing research interest over the last five years, and it would be a good complement to present energy harvesting technologies such as solar, thermal, wind and vibration energy. With the rapid growth of RF sources such as cellular systems, wireless communication devices, FM transmitters and receivers, cellular mobile stations, Wi-Fi devices and TV stations, the ambient radiated power has increased rapidly, which can empower small devices. Harvesting energy from the available RF signals enables the powering of low-power devices such as wireless sensor networks and medical implants. It could make devices self-sustainable and easy to operate by prolonging battery life or eliminating the need for batteries altogether. It has many applications in the field of the Internet of Things (IoT) and would be beneficial for powering devices remotely.
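    A back-of-the-envelope sketch (ours, with illustrative numbers) shows why only low-power devices are realistic targets: the free-space Friis equation bounds the RF power an antenna can capture at a given distance from a transmitter.

        import math

        def friis_received_power(pt_w, gt, gr, freq_hz, d_m):
            # Friis: Pr = Pt * Gt * Gr * (lambda / (4*pi*d))^2, free space.
            lam = 3e8 / freq_hz
            return pt_w * gt * gr * (lam / (4 * math.pi * d_m)) ** 2

        # Assumed scenario: 1 W transmitter at 900 MHz, unity-gain antennas,
        # 50 m away -- a fraction of a microwatt arrives, hence targets
        # such as sensor nodes and medical implants.
        pr = friis_received_power(1.0, 1.0, 1.0, 900e6, 50.0)
        print(f"{pr * 1e6:.2f} microwatts")  # ~0.28 microwatts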

  • af Megha M Kartha
    373,95 kr.

    Modern wireless communication systems require high data rates to provide acceptable quality of service. However, this poses challenges in the allocation of spectrum, which is a scarce resource. To overcome this, the Multiple Input Multiple Output (MIMO) configuration has been developed, which offers excellent spectral efficiency and data rates, as proven by mathematical modelling. The major hurdle in wireless communication systems is achieving good isolation between antenna elements while also maintaining a miniaturized structure. Therefore, reducing mutual coupling and signal correlation is essential for better performance. In recent years, there has been a remarkable evolution in satellite systems, leading to enhanced reliability in mobile communication. Initially, satellite performance was constrained, necessitating the use of large ground station antennas and offering only limited services. However, with advancements, satellites have become more robust, capable of delivering a broader spectrum of services with improved quality. MIMO has emerged as an important technology for enhancing the efficiency and data rates of wireless networks and terrestrial communication standards. Consequently, there is growing interest in employing MIMO techniques in satellite communication systems. However, implementing MIMO in satellite communication poses unique challenges in comparison to the terrestrial domain. As the demand for satellite capacity continues to grow, MIMO has emerged as a promising solution to increase spectral efficiency without the need for additional power. The utilization of the spatial domain is the primary source of enhanced channel capacity. Extensive literature exists that provides a comprehensive review of MIMO techniques specifically designed for satellite communication.
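    A short numerical sketch (ours, not the book's model) of why the spatial domain raises capacity: for an i.i.d. Rayleigh channel, the ergodic MIMO capacity C = E[log2 det(I + (SNR/Nt) H H^H)] grows roughly linearly with the number of antennas.

        import numpy as np

        def mimo_capacity_bits(nt, nr, snr_linear, trials=2000, seed=0):
            # Average log2 det(I + (SNR/Nt) * H H^H) over random channels H.
            rng = np.random.default_rng(seed)
            total = 0.0
            for _ in range(trials):
                h = (rng.standard_normal((nr, nt)) +
                     1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
                m = np.eye(nr) + (snr_linear / nt) * (h @ h.conj().T)
                total += np.log2(np.linalg.det(m).real)
            return total / trials

        snr = 10 ** (10 / 10)  # 10 dB
        print(mimo_capacity_bits(1, 1, snr))  # ~2.9 bit/s/Hz, single antenna
        print(mimo_capacity_bits(4, 4, snr))  # ~11 bit/s/Hz, 4x4 MIMO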

  • af Lalit Kumar Singh
    373,95 kr.

    Enzymes have been known to humanity since ancient times and have gained increased impetus in recent years owing to their growing importance in industry. Protein enzymes are created by living things to control their biochemical activities, and they can be thought of as catalysts that speed up biological reactions on their own. Based on the reaction types they catalyse, enzymes are classified into six categories: oxidoreductases, transferases, hydrolases, lyases, isomerases and ligases. Many different biochemical events occur naturally inside the body, and they all follow a particular set of rules facilitated by enzymes as metabolic catalysts. Enzymes decrease the activation energy of various biological reactions essential to human survival without altering their overall structure, making them essential for many biological processes. Organisms from the tiniest microorganisms to the largest land and sea animals possess enzymes. In plants and animals, enzymes are found only in trace amounts, making them unavailable for commercial use. Enzyme synthesis in microorganisms, on the other hand, has several benefits over traditional methods, including simplicity of use, quick multiplication under controlled conditions, amenability to genetic modification, high production yield and so on. Several distinctive properties of microbial enzymes, such as their catalytic action, specificity, stability, non-toxicity, environmentally benign nature, cost-effectiveness and simplicity of manufacture, are attracting increased interest for their use in industry. As biocatalysts for the regulated synthesis of several products from diverse substrates, microbial enzymes have gained widespread recognition in recent years. In addition, several microbial enzymes can efficiently biodegrade or bioconvert hazardous chemicals, including compounds containing nitrile, amine, carboxylic and phenolic groups, into usable products. Industrial-scale production of microbial enzymes is essential due to their versatility. Microbial laccases are among the most important enzymes and are produced by various microorganisms including fungi, bacteria and actinomycetes. These copper-containing enzymes are known to catalyze the oxidation of a variety of substrates, including phenols, aromatic amines and lignin-derived compounds. Microbial laccases are particularly interesting for industrial applications due to their high stability, broad substrate specificity and low cost of production.

  • af Radhika K
    338,95 kr.

    Stress is a complex problem that has multiple effects on the body and the mind. Stress is the body's response to overwhelming demands or challenges in a scenario, manifesting as emotional, physical or behavioural changes. The way an individual sees the scenario has a significant impact on how stressed they are. When an individual faces a challenge in achieving a goal, they evaluate the scenario in two stages: the need to achieve the desired goal is assessed in the first stage, and the external and internal resources to meet the challenge are assessed in the second. Every individual is exposed to a stressful scenario at some point in life and will react accordingly. Various circumstances can be imagined that have the potential to be experienced over and over again, even multiple times every day. Stress levels may rise considerably in certain students and appear with signs of anxiety, notably during examinations; this is termed academic stress. The causes of academic stress include fear of an exam due to lack of interest in a particular subject or lack of proper preparation for the exam. Positive stress and negative stress are the two main types of stress. Positive or acute stress lasts for a short time and arises when an individual's capabilities are sufficient to meet the challenge; negative or chronic stress lasts for a long time and arises when a challenge exceeds an individual's capabilities. If an individual can cope with a stressful scenario, the next time a similar scenario arises it will not create much stressful impact. Similarly, if an individual cannot cope with a stressful scenario and is repeatedly exposed to a similar scenario, it will lead to chronic stress. Whenever the body encounters a stressful scenario, the brain triggers the stress response in reply to sensory input from the ears, nose and eyes. This response is known as "fight-or-flight".

  • af Khushboo Salvi
    358,95 kr.

    Our world is quite beautiful, but the increasing use and improper disposal of effluents from various industries is creating a great deal of pollution in the environment. Anthropogenic activities have a large and profound impact on the environment. Water is essential for the life of living organisms, but it is rapidly becoming polluted as a result of the effluents discharged into our natural resources. The chemical, food and beverage, textile, pesticide and insecticide, and dyeing and printing industries are among those causing water pollution. Dye effluents originating from production and application industries pose a major threat to surrounding ecosystems because of their toxicity, recalcitrant nature and sometimes potentially carcinogenic nature. Colored solutions containing dyes from industrial effluents may cause harmful effects in human beings, animals and plants through photosensitization and photodynamic damage. The color in wastewater is an obvious indicator of water pollution due to dyes and pigments. The discovery of synthetic dyes has provided a wide range of colors that are colorfast and offer a wider color range and brighter shades. If this colored water can be treated to give colorless and less toxic or non-toxic water, it can be reused for various useful purposes. Dyes have become a major source of organic pollutants, polluting the aquatic ecosystem. Synthetic dyes create many problems due to their toxic nature and adverse effects on all forms of life; the removal of dyes from wastewater is therefore necessary.

  • af Neha Deepak Saxena
    378,95 kr.

    Energy and environmental sustainability have triggered worldwide interest in transportation networks and industrial activity. Friction consumes approximately one-fifth of the total energy utilized in machinery systems, and by some estimates one-third of all energy is spent solely on overcoming friction. Tribology, the science of lubrication, friction and wear, deals with preventing the wear and tear of surfaces that move relative to one another under load. The term was coined by Jost in 1966 and comes from the Greek word tribos, meaning "to rub"; tribology is thus "the science dealing with the rubbing of surfaces", and the study of friction and wear can also be termed lubrication science. Tribological interactions consume around 23% of the world's total energy usage: about 20% is used to overcome friction, whereas 3% is used to re-fix worn components and replace equipment that has failed due to wear and wear-related problems. Energy losses due to friction and wear can be decreased by 40% in the long term (15 years) and by 18% in the short term by utilizing new surfaces, materials and lubrication technologies. Reducing wear and friction can extend the life of automobiles, machines and other equipment around the world. In the long run, these savings would be equal to 1.4% of global GDP and about 8.7% of overall energy use worldwide. A vast variety of innovative technological solutions for reducing friction and wear has been proposed, yet not widely adopted. Efficient coolants and lubricants are demanded by rising energy needs, precision manufacturing, miniaturization, nuclear regulations and critical economies. Considering the negative impacts of friction and wear, lubricants play a key role in reducing these impacts in tribological processes. Lubricants also reduce the temperature in tribomechanical systems and enhance system performance. All types of maintenance incorporate lubrication as a critical component of the whole procedure. In mechanical systems, consistent performance and energy saving are the foremost demands on an eco-friendly lubricant. The physico-chemical interactions between the lubricant molecules, the material surfaces and the environment produce lubrication in the system.

  • af Chethan Gowda R K
    383,95 kr.

    Ground vibrations during earthquakes cause deformation and forces in structures. The earthquakes of the late 19th and early 20th centuries triggered several early advancements in science and engineering. The Bhuj earthquake of 2001 was the first earthquake in India to cause the collapse of modern multi-storey buildings. The main principle used in the seismic design of structures is capacity design. This principle allows the design of dissipative members, in which energy dissipation is concentrated during a seismic event, while the non-dissipative members are protected from failure by providing them with a level of overstrength such that they can resist the maximum force developed by plasticization in the dissipative zone. The capacity design principles are: a global plastic mechanism for the structure, identification of dissipative and non-dissipative zones in the structure, and proper detailing to ensure maximum ductility in the dissipative zones. These days, because of rapid population growth, civil engineering structures like high-rise buildings, long-span bridges and huge industrial buildings have become more common. These structures are prone to damage during natural disasters, and the consequences of failure are catastrophic; hence some precautions need to be taken during design. However, such structures cannot be made completely safe against calamities, and extraordinary situations may damage them or lead to collapse. So it is important to keep the structure safe even after extraordinary natural disasters. Structures can be protected against natural hazards by using a lateral load resisting system, a structural damping system or an energy dissipating system. Steel lateral force resisting systems (LFRS) have been used in all forms of structures to resist lateral forces such as earthquake and wind forces. This system provides various advantages such as cost reduction, speed of construction, reduced foundation cost and increased ductility of the structure. Using steel as the construction material, economical designs can be achieved that provide life safety for the occupants. Commonly used structural typologies for a steel building to resist lateral forces are moment resisting frames (MRF), braced frames and steel plate shear walls.

  • af Vaishali Vaibhav Hirlekar
    363,95 kr.

    In the modern world, fake information is a growing issue. Hoaxes and deliberate lies disseminated through traditional media outlets or social platforms can be considered misleading or fake news. Fake information is produced and spread to deceive a person or an organization financially or politically. The transmission and dissemination of such misleading information can cause serious hazards; it can even threaten national security. To mitigate the negative consequences of misleading news, methods to automatically recognise fake news must be developed, and different measures have so far been put in place to identify misleading information. There are various ways to characterise fake news. An article of news that is purposefully and demonstrably untrue is known as fake news. The phrase "fake news" is used to describe misleading information that appears in mainstream media, and the term covers a variety of ideas, including rumour and misinformation. According to another definition, fake news is misleading information released under the guise of legitimate news, commonly spread through news outlets or the internet with the goal of gaining politically or financially, boosting readership and prejudicing public opinion. Some researchers distinguish between different types of fake news, such as serious fabrications, large-scale hoaxes and humorous fakes. Today, social media has become a necessary component of daily life. Every day we read numerous articles on social media; some are real, but many turn out to be fake. The reason is that every user is a self-publisher who does not verify the veracity of information, and people forward erroneous or fabricated articles, ultimately producing fake news: stories made up to sway readers' judgements or mislead them. Because of this, misleading information spreads online more quickly than we can conceive, and it has become more prevalent on social platforms over the past several years. Plenty of disinformation is spread about issues like politics, economics and technological advancements. Misinformation, disinformation and malinformation are three distinct concepts that are frequently discussed in connection with fake news. Misinformation is false information that is shared unintentionally. Disinformation is false information that a person spreads deliberately, knowing it to be false. Contrarily, information that is grounded in reality but does harm to an individual, group or nation is referred to as malinformation.
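    A minimal sketch of the automatic recognition called for above (our illustration of one common baseline, TF-IDF features with a linear classifier, not the book's method; the toy texts and labels are fabricated for the example):

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        # Tiny toy corpus; a real system needs a large labelled dataset.
        texts = ["celebrity secretly replaced by robot, sources say",
                 "central bank raises interest rate by 25 basis points",
                 "miracle cure banned by doctors, share before deleted",
                 "city council approves budget for new school building"]
        labels = [1, 0, 1, 0]  # 1 = fake, 0 = real

        model = make_pipeline(TfidfVectorizer(), LogisticRegression())
        model.fit(texts, labels)
        print(model.predict(["shocking cure doctors do not want you to see"]))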

  • af Amol Vilas Ganjare
    398,95 kr.

    Suspended solids are the most easily visible impurity in liquid streams in several processes and in urban and industrial wastewater. These suspended particles may be organic or inorganic in nature and can create problems in the process. Separating the suspended solids by gravity settling to produce a clarified overflow and a thickened solids underflow has long been used in the wastewater industry. The terms sedimentation and clarification describe this gravity separation process, depending on whether the focus is on the clarified water or on the thickened solids, respectively. The standard gravity settler design often fails to achieve the desired separation efficiency due to deviations from normal operating conditions, which arise from changes upstream, i.e., in the feed to the gravity settler. In this situation, the behavior of the flow and particles in the top region of the tank is of particular interest, since particles that remain in the supernatant are carried to the effluent, reducing the separation efficiency of the gravity settler. The separation of particles in the gravity settler depends on several factors, one of the most important being the recirculation zones in the settler. Predicting these recirculation zones is important in order to modify the design of the gravity settler and enhance its separation efficiency.
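    Gravity settlers of this kind are commonly sized from the terminal settling velocity of the particles; for small particles in laminar flow, Stokes' law (standard theory, offered here as our illustration rather than quoted from the book) gives a quick estimate:

        def stokes_settling_velocity(d_m, rho_p, rho_f, mu, g=9.81):
            # v = g * d^2 * (rho_p - rho_f) / (18 * mu), valid for Re << 1.
            return g * d_m**2 * (rho_p - rho_f) / (18 * mu)

        # Assumed example: 50-micron mineral particle (2650 kg/m^3)
        # settling in water at 20 C (998 kg/m^3, 0.001 Pa.s).
        v = stokes_settling_velocity(50e-6, 2650.0, 998.0, 1.0e-3)
        print(f"{v * 1000:.2f} mm/s")  # ~2.25 mm/s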

  • af Annu Verma
    338,95 kr.

    Chickpea (Cicer arietinum L.) is a self-pollinated leguminous crop, ranking third in the world after dry beans (Phaseolus vulgaris L.) and dry peas (Pisum sativum L.). Chickpea is known by several common names, including garbanzo bean, ceci bean, sanagalu, kala chana, Bengal gram and Chanaka (Sanskrit). The name Cicer is of Latin origin, derived from the Greek word 'kikus', meaning power or quality. Chickpea belongs to the family Fabaceae, subfamily Faboideae, has a diploid chromosome number (2n = 2x = 16) and a relatively small genome of about 740 Mbp. The genome of chickpea has been sequenced: the ~738 Mb draft whole-genome shotgun sequence of the kabuli chickpea variety CDC Frontier contains an estimated 28,269 genes and a few million genetic markers. Based on seed size and colour, cultivated chickpeas are of two main types: desi (microsperma) chickpeas, with small seeds, light to dark brown colour and a thick seed coat, and kabuli (macrosperma) chickpeas, with bolder seeds than the desi types, a whitish-cream colour and a thin seed coat. Desi chickpeas are the most prominent, representing nearly 80% of worldwide production. Desi varieties are grown principally in the Indian subcontinent and in Ethiopia, Mexico and Iran, while kabuli chickpeas are grown mostly in Southern Europe, Northern Africa, Afghanistan, Pakistan and Chile, and to a small degree in the Indian subcontinent. Chickpea is also grown in Australia, Canada and the USA, principally for export. Chickpea probably originated in south-eastern Turkey. Four centres of diversity have been identified, in the Mediterranean, Central Asia, the Near East and India, as well as a secondary centre of origin in Ethiopia.

  • af Navjot Singh
    363,95 kr.

    A Quantum Dot Sensitized Solar Cell employing ternary or quaternary quantum dots represents an innovative approach to solar energy conversion. In this design, the utilization of three or four distinct types of quantum dots enables the solar cell to efficiently capture a broader spectrum of light. This enhanced light absorption is achieved through careful bandgap engineering, allowing the quantum dots to absorb different wavelengths of sunlight. The diverse composition of ternary/quaternary quantum dots contributes to improved charge separation and reduced recombination losses, fundamental factors in maximizing the efficiency of energy conversion. The advanced materials not only facilitate efficient electron injection into the semiconductor layer but also offer versatility in tailoring the optoelectronic properties of the solar cell. The multi-dot structure enhances light harvesting capabilities, increasing the likelihood of capturing photons across a wide range of the solar spectrum. This approach addresses limitations observed in traditional solar cells and holds promise for boosting overall performance in harnessing solar energy. The potential for multiple exciton generation within ternary/quaternary quantum dot systems adds an intriguing dimension to their functionality, offering the possibility of generating multiple electron-hole pairs from a single absorbed photon. This phenomenon has the potential to significantly enhance the quantum efficiency of the solar cell.

  • af Pradeep Kumar
    363,95 kr.

    At present, network and computer technologies are changing rapidly. As a network of networks connecting heterogeneous machines, the Internet has gained widespread acceptance as a basic channel for data transmission, and the use of the World Wide Web (or Web) for distributed computing is expanding dramatically. The Internet provides a wide range of services, including electronic mail and electronic document exchange, of which the Web is unquestionably the most popular. Though the Web was once used solely to distribute data on Web sites, we now see it being used to distribute entire applications. Initially, distributed computing was based on the client-server architecture, in which all tasks are executed on a large server (the service provider) on the basis of client requests. Several rounds of exchange between client and server may be necessary to fulfil a request, depending on the type of service sought. The client-server model is less flexible and places an undue burden on network infrastructure. Distributed computing is a type of computing in which the components are distributed among multiple networked computers that communicate and coordinate with one another in order to reach a common purpose. Three important technologies emerged as distributed computing progressed: message passing, remote procedure call (RPC) and distributed object systems. Message passing is simple and direct: programs located at the two ends communicate by passing messages over the network. Many Internet applications, such as FTP, the Web and email, rely on basic message passing. The Mobile Agent (MA) differs from general process migration: as a rule, process migration does not allow a process to choose its migration time or destination itself, whereas a mobile agent can migrate at any time and move to any place it needs to go. Mobile agents are code fragments that can move freely among the nodes of a network, carrying their own state and code from a host computer to a target computer to execute the corresponding task.
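    As a minimal sketch of the RPC style described above (standard-library Python, our illustration), a server exposes a procedure over the network and a client invokes it as if it were local:

        import threading
        import xmlrpc.client
        from xmlrpc.server import SimpleXMLRPCServer

        # Server side: register a procedure and serve it over the network.
        server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)
        server.register_function(lambda a, b: a + b, "add")
        threading.Thread(target=server.serve_forever, daemon=True).start()

        # Client side: invoke the remote procedure like a local call.
        proxy = xmlrpc.client.ServerProxy("http://localhost:8000")
        print(proxy.add(2, 3))  # -> 5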

  • af Bhanu Priya
    398,95 kr.

    Semiconducting materials have been known since the early 19th century, when Michael Faraday discovered that, unlike pure metals, the electrical resistance of silver sulphide decreased as the temperature of the material was raised. From a more practical perspective, a semiconductor is, as the name suggests, a substance with electrical conductivity between that of an insulator and a metal. A plethora of distinctive features of semiconductors has led to their use in a wide variety of applications. Since the development of the transistor, silicon (Si) has become the most well-known semiconductor in the world. Without a doubt, the advent of the transistor in the 20th century was the single most important scientific event of the last two centuries, paving the way for the rapid development of technology in our modern world. Over the last few decades, Transition Metal Oxide Semiconductors (TMOS), a class of materials composed of transition metal cations and oxygen anions, have attracted significant attention in the fields of electronics and optoelectronics due to their unique properties. They exhibit a wide range of electronic and optical behaviors, including band gap tuning, carrier density modulation and photoresponse enhancement. TMOS are especially promising for applications such as solar cells, gas sensors and electronic devices due to their high carrier mobility, chemical stability and abundance of raw materials. The diverse range of properties exhibited by TMOS makes them a promising avenue for developing new and advanced technologies in materials science.

  • af Hardeep Singh
    373,95 kr.

    Rural households in developing countries face several types of risk, dominant among them weather risk. Researchers flag a host of factors responsible for the greater vulnerability of farm households to weather risks in developing economies: (i) a significant proportion of the sown area is rainfed; (ii) rural households face substantial variation in income across seasons and years due to the vagaries of the monsoon; (iii) households lack the adaptive capacity, in terms of access to information, institutions, infrastructure and finances, to cope with such weather risks; and (iv) systemic weather shocks make informal risk-sharing mechanisms infeasible for resource-poor households and force many to sell their productive assets, with dire consequences for future outcomes. Further, it has been noted that weather patterns are becoming more and more unpredictable over time: temperatures are rising, rainfall is becoming more irregular, and the frequency of extreme events such as droughts, floods and heatwaves has been increasing. This further aggravates the problems that weather risks pose to agriculture and agriculture-based livelihoods. Examining the impacts of weather risks on agricultural productivity, and further on the overall economy of an agricultural household, has generated a lot of interest. Besides examining the impacts of weather shocks on agricultural productivity or profits, several studies have also discussed the coping mechanisms employed by farmers to reduce agricultural income losses due to weather shocks. In examining the impacts of rainfall variation on crop yields, the existing literature has focused on either the level of, or deviations in, seasonal rainfall, ignoring the role of uncertainty in monsoon onset in explaining losses in crop yields. The timing of monsoon arrival has many repercussions for agricultural households and for the biodiversity of a region. Changes in the pattern of monsoon arrival over time have been documented in the literature, and several reasons have been cited to explain them, the most important being variation in temperature due to climate change.

  • af S. N. Krushna Naik
    383,95 kr.

    Energy and the environment are among the most essential aspects of human life almost all over the world. On the energy side, the conventional sources currently meeting demand will not last much longer, so the immediate solution to such crises is to utilize non-conventional, alternative and renewable resources, which need to be explored for this purpose. At present, the earth is threatened by enormous environmental pollution, a problem that has been growing in recent years due to the indiscriminate disposal of solids and liquids rich in organic content. Of late, these organic wastes have been considered for the generation of energy by biotechnological means, a practice with two distinct benefits: saving the environment from the menace of pollution and generating valuable energy. Agricultural residues, a great source of lignocellulosic biomass, are renewable, largely unexploited and inexpensive, and can be used for the production of greener energy. The threat of limited fossil fuel reserves depleting at an alarming rate has created many problems for the civilized world and warranted a concentrated focus on renewable resources for the production of greener energy, which promises to meet the world's high energy demand. Woody and non-woody plant biomass has been targeted at solving the present-day energy crisis. Cellulose, hemicelluloses and lignin together constitute around 90% of the dry weight of plant biomass, though the composition of lignocellulose varies among plant residues. Lignocellulosic biomass is abundant in nature and represents more than half of the organic matter produced globally by plant photosynthesis. Bioconversion of relatively inexpensive lignocellulosic biomass into biofuels and value-added products can effectively ease the pressure on energy supply, and such conversion benefits sustainable development. Cellulose is a copious and ubiquitous natural polymer, being the principal constituent of the plant cell wall. Cellulase is recognized as one of the most important enzymes used in pulp and paper processes. The application of cellulolytic enzymes in the bio-bleaching process is environmentally friendly and can improve the quality of pulp and paper. The addition of cellulase to silage production can improve the quality of silage fermentation, as it increases both fiber degradation and the content of water-soluble carbohydrates (WSC), which are substrates for lactic acid bacteria.

  • af Sabitha Banu A
    383,95 kr.

    Machine-to-Machine (M2M) networking is a technology in which devices worldwide are interconnected through the Internet to exchange information without human intervention. M2M networks are considered to provide intelligent connection and communication between machines. Machine-to-Machine networking is used for the remote monitoring of machines, and it laid the foundation for IoT technology. In the M2M network architecture, all devices are connected through a powerful device called a gateway to transmit information. Every node in a Machine-to-Machine network collects data through sensors, and the collected data is transmitted to other networks through M2M gateways. M2M networks encounter several attacks. The Man-in-the-Middle (MITM) attack is one of the oldest and most classic cyber-attacks; it compromises all the CIA rules (Confidentiality, Integrity, Availability) and belongs to both the active and the passive attack types. The attackers intend to steal sensitive information or credentials and eavesdrop on the data in order to intercept, modify or delete it. There are many different forms of threats, attacks and vulnerabilities that may corrupt and compromise a system's security, affecting Confidentiality, Integrity and Availability. Conceptually, network attacks are classified into active attacks, in which the attacker acquires unlawful access to the system's resources, and passive attacks, in which the attacker does not. MITM attacks, however, belong to both the active and the passive category.

  • af Ansha K K
    378,95 kr.

    Over the past few years, there has been an incredible rise in demand for wireless communication technology that supports high data rates, exceptional spectrum efficiency and excellent broadband fading reduction. Due to changes in how information is generated, shared and used by today's society, data traffic has expanded rapidly in the wireless realm. There will not be enough spectrum left for wireless technology to manage these high data rates, hence new spectral bands are needed. This has led to the use of the Terahertz (THz) frequency band (0.1-10 THz). This band is often known as the THz gap, since the technology for generating and detecting terahertz radiation is still in its infancy; it is the frequency range between millimeter-wave and infrared that has received the least attention from the research fraternity. To satisfy the needs of 5G and B5G (Beyond 5G) networks, the spectrum between 0.1 and 10 THz is regarded as a key scientific opportunity. Antenna development in these frequency bands leads to miniaturization of the antenna into the millimeter or micrometer range. For instance, wireless data traffic has been increasing by 200% every three years, which in turn is pushing wireless data rates to the point where they are almost comparable with wired communication systems. The demand for antennas providing faster connectivity anywhere at any time has grown in tandem with this trend. Applications of terahertz technology include static point-to-point networking in server rooms or high-performance computing, as well as short-range ultra-broadband accelerated wireless connectivity; such antennas might replace the large cable connections in server rooms or supercomputers. For the transmission of signals, researchers are looking to the sub-THz and THz frequency bands (0.1-10 THz), since very high data rates over short ranges have become the major goal in the emerging communication field. The sub-THz band (around 0.1 THz) experiences less reduction in signal strength due to atmospheric factors like rain and fog compared to the upper THz band, which yields a spectrum with increased capacity for data transmission. Terahertz communication links are crucial in situations that demand high data rates over short distances: with these frequencies, rapid data transmission can be achieved within a span of 10 meters. There are presently 23 billion devices linked to the internet, and it is predicted that there will be 75 billion devices online by the year 2025.
