
Books in the Machine Learning: Foundations series

  • by Thomas Bartz-Beielstein
    664,95 kr.

    This book deals with the exciting, seminal topic of Online Machine Learning (OML). The content is divided into three parts: the first part looks in detail at the theoretical foundations of OML, comparing it to Batch Machine Learning (BML) and discussing what criteria should be developed for a meaningful comparison. The second part provides practical considerations, and the third part substantiates them with concrete practical applications.

    The book is equally suitable as a reference manual for experts working with OML, as a textbook for beginners who want to get started with OML, and as a scientific publication for researchers, since it reflects the latest state of research. It can also serve as a form of OML consulting, since decision-makers and practitioners can use the explanations to tailor OML to their needs, apply it to their own applications, and weigh whether the benefits of OML outweigh the costs.

    OML will soon become practical, and it is worthwhile to get involved with it now; this book already presents some tools that will facilitate the practice of OML in the future. A breakthrough is promising because practice shows that, due to the large amounts of data that accumulate, conventional BML is no longer sufficient. OML is the solution for evaluating and processing data streams in real time and delivering results that are relevant in practice (a minimal incremental-learning sketch follows after this list).

    In addition to this book, interactive Jupyter Notebooks and further material about OML are provided in the GitHub repository (https://github.com/sn-code-inside/online-machine-learning). The repository is continuously maintained, so the notebooks may change over time.

  • by Alexander Jung
    660,95 kr.

  • by Yi Mei, Mengjie Zhang, Fangfang Zhang, et al.
    1.673,95 kr.

  • by Yiqiang Chen
    822,95 kr.

    Transfer learning is one of the most important technologies in the era of artificial intelligence and deep learning. It seeks to leverage existing knowledge by transferring it to another, new domain. Over the years, a number of relevant topics have attracted the interest of the research and application community: transfer learning, pre-training and fine-tuning, domain adaptation, domain generalization, and meta-learning. This book offers a comprehensive, tutorial-style overview of transfer learning, introducing new researchers in this area to both classic and more recent algorithms. Most importantly, it takes a "student's" perspective to introduce all the concepts, theories, algorithms, and applications, allowing readers to quickly and easily enter this area. Accompanying the book, detailed code implementations are provided to better illustrate the core ideas of several important algorithms, presenting good examples for practice (a short fine-tuning sketch follows after this list).

  • by Liang Feng
    1.957,95 kr.

    A remarkable facet of the human brain is its ability to manage multiple tasks with apparent simultaneity. Knowledge learned from one task can then be used to enhance problem-solving in other related tasks. In machine learning, the idea of leveraging relevant information across related tasks as inductive biases to enhance learning performance has attracted significant interest. In contrast, attempts to emulate the human brain's ability to generalize in optimization, particularly in population-based evolutionary algorithms, have received little attention to date. Recently, a novel evolutionary search paradigm, Evolutionary Multi-Task (EMT) optimization, has been proposed in the realm of evolutionary computation. In contrast to traditional evolutionary searches, which solve a single task in a single run, an evolutionary multi-tasking algorithm searches concurrently over multiple search spaces corresponding to different tasks or optimization problems, each possessing a unique function landscape (a simplified sketch of this idea follows after this list). By exploiting the latent synergies among distinct problems, EMT optimization has demonstrated superior search performance, in terms of solution quality and convergence speed, on a variety of continuous, discrete, and hybrid (mixed continuous and discrete) tasks. This book discusses the foundations and methodologies of developing evolutionary multi-tasking algorithms for complex optimization, including in domains characterized by multiple objectives of interest, high-dimensional search spaces, and NP-hardness.
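
The first blurb contrasts batch learning with models that are updated incrementally as a data stream arrives. The following is a minimal sketch of that idea, not code from the book or its repository: it assumes scikit-learn's SGDClassifier with partial_fit, and the synthetic stream is a placeholder for real data.

# Minimal online-learning sketch: a linear classifier updated one mini-batch
# at a time as a stream arrives. Illustrative only; not from the book's repo.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier()                   # linear classifier trained incrementally by SGD
classes = np.array([0, 1])                # label set must be declared for partial_fit

def stream(n_batches=100, batch_size=32):
    """Simulate an endless stream of (features, label) mini-batches (placeholder data)."""
    for _ in range(n_batches):
        X = rng.normal(size=(batch_size, 5))
        y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
        yield X, y

accuracy = []
for X, y in stream():
    # "Test-then-train": evaluate on the new batch before learning from it.
    if hasattr(model, "coef_"):
        accuracy.append(model.score(X, y))
    model.partial_fit(X, y, classes=classes)

print(f"Mean prequential accuracy: {np.mean(accuracy):.3f}")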
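
The transfer learning blurb centers on pre-training and fine-tuning: reuse a model trained on a large source domain and adapt only a small part of it to a new target task. A minimal sketch of that pattern, assuming a torchvision ResNet-18 backbone and a hypothetical two-class target task with dummy data; it illustrates the general technique, not code from the book.

# Fine-tuning sketch: freeze a pretrained backbone and train only a new head.
# Illustrative only; target task (2 classes) and data are placeholders.
import torch
import torch.nn as nn
from torchvision import models

num_target_classes = 2  # hypothetical target task

# Load a backbone pretrained on ImageNet (torchvision >= 0.13 weights API).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze all pretrained parameters so only the new head is trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the final classification layer with a fresh head for the target task.
model.fc = nn.Linear(model.fc.in_features, num_target_classes)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One training step on a dummy batch (replace with a real DataLoader).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_target_classes, (8,))

model.train()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"loss after one step: {loss.item():.4f}")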
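
The last blurb explains that an evolutionary multi-tasking algorithm searches several task-specific landscapes concurrently within a single run, so that solutions found for one task can influence the search for another. The sketch below illustrates that core idea with one shared population, per-individual skill factors, and a random mating probability (RMP) that controls cross-task transfer. It is a simplified illustration under assumed settings (two placeholder sphere-type objectives, arbitrary population size and RMP), not the specific algorithms developed in the book.

# Evolutionary multi-tasking sketch: one population, two optimization tasks.
# Crossover between individuals assigned to different tasks (with probability
# RMP) is the channel for cross-task knowledge transfer.
import numpy as np

rng = np.random.default_rng(42)
DIM, POP, GENS, RMP = 10, 60, 2000, 0.3

def sphere(x):                       # task 0: minimize sum of squares
    return np.sum(x ** 2)

def shifted_sphere(x):               # task 1: same landscape shifted by 0.4
    return np.sum((x - 0.4) ** 2)

tasks = [sphere, shifted_sphere]

# Unified search space [0, 1]^DIM shared by both tasks.
pop = rng.random((POP, DIM))
skill = rng.integers(0, len(tasks), POP)          # task assignment per individual
fitness = np.array([tasks[s](x) for x, s in zip(pop, skill)])

for _ in range(GENS):
    i, j = rng.integers(0, POP, 2)
    if skill[i] == skill[j] or rng.random() < RMP:
        alpha = rng.random(DIM)
        child = alpha * pop[i] + (1 - alpha) * pop[j]   # blend crossover
    else:
        child = pop[i].copy()
    child += rng.normal(0, 0.02, DIM)                   # mutation
    child = np.clip(child, 0.0, 1.0)
    c_skill = skill[rng.choice([i, j])]                  # inherit a parent's task
    c_fit = tasks[c_skill](child)

    # Replace the worst individual of the child's task if the child is better.
    same_task = np.where(skill == c_skill)[0]
    worst = same_task[np.argmax(fitness[same_task])]
    if c_fit < fitness[worst]:
        pop[worst], fitness[worst], skill[worst] = child, c_fit, c_skill

for t, f in enumerate(tasks):
    print(f"task {t} ({f.__name__}): best objective {fitness[skill == t].min():.4f}")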
