A review of deep learning for beginners

ByteNews
2023-01-05

Deep learning is one of the latest trends in machine learning and artificial intelligence research, and one of the most popular directions in science today. Deep learning methods have revolutionized computer vision and machine learning, and new deep learning techniques keep emerging that outperform the previous state of the art in machine learning and even earlier deep learning methods. In recent years, the field has seen many major breakthroughs. Because deep learning is developing so fast, its progress is difficult to follow, especially for new researchers. In this article, we briefly discuss recent advances in deep learning.

1. Introduction

The term "deep learning" (DL) was first introduced into machine learning (ML) in 1986 and was later applied to artificial neural networks (ANN) in 2000. Deep learning methods consist of multiple layers to learn data features with multiple levels of abstraction. The DL approach allows a computer to learn complex concepts from relatively simple concepts. For artificial neural networks (ANNS), deep Learning (DL) (also known as Hierarchical Learning) refers to the precise allocation of credit across multiple computing stages to transform aggregate activation in the network. In order to learn complex functions, deep architectures are used for multiple levels of abstraction, i.e., nonlinear operations; ANNs, for example, has many hidden layers. In a more accurate summary, deep learning is a subfield of machine learning that uses multiple layers of nonlinear information processing and abstraction for supervised and unsupervised feature learning, representation, classification, and pattern recognition.

Deep learning, or representation learning, is a branch or subfield of machine learning. Most people agree that modern deep learning methods have been developed since 2006. This article is a review of the latest deep learning techniques, recommended for researchers who are about to enter the field. It covers the basic ideas of DL, its main methods, its latest developments, and its applications.

Review papers are very beneficial, especially for new researchers in a field. A research area that holds great value in the near future, together with its related applications, is often difficult to track in real time. Scientific research is now an attractive career, because knowledge and education are easier to share and access than ever before, and the one safe assumption about any trending area of technology research is that it will improve in many ways. As a result, an overview of a field written a few years ago may already be out of date.

Given the popularity and growth of deep learning in recent years, we give a brief overview of deep learning and neural networks (NN), along with their major advances and breakthroughs over the years. We hope this article will help novice researchers gain a comprehensive understanding of recent deep learning research and techniques, and guide them toward the right starting point. At the same time, we hope to honor the top DL and ANN researchers of our era with this work: Geoffrey Hinton, Juergen Schmidhuber, Yann LeCun, Yoshua Bengio, and many others whose research built modern artificial intelligence (AI). Following up on their work to track the best current advances in DL and ML research is also essential for us.

In this paper, we first briefly review past survey papers and study the models and methods of deep learning. We then describe the latest developments in the area: deep learning (DL) methods, deep architectures (i.e., deep neural networks, DNN), and deep generative models (DGM), followed by important regularization and optimization methods. In addition, two short sections summarize open-source DL frameworks and important DL applications. We discuss the state and future of deep learning in the final two sections, the discussion and the conclusion.

2. Related Research

Over the past few years, a number of review papers on deep learning have been published. They give good descriptions of DL methods and methodologies, their applications, and future research directions. Here, we briefly introduce some excellent review papers on deep learning.

Young et al. (2017) discuss DL models and architectures, primarily as used in natural language processing (NLP). They demonstrate DL applications across NLP domains, compare DL models, and discuss possible future trends.

Zhang et al. (2017) discuss the current best deep learning techniques for both front-end and back-end speech recognition systems.

Zhu et al. (2017) reviewed the latest progress of DL in remote sensing. They also discussed open-source DL frameworks and other technical details of deep learning.

Wang et al. (2017) described the evolution of deep learning models in chronological order. Their short paper briefly introduces the models and the breakthroughs in DL research, takes an evolutionary approach to understanding the origins of deep learning, and offers insights into neural network optimization and future research.

Goodfellow et al. (2016) discussed deep networks and generative models in detail, and summarized recent DL research and applications from the perspective of machine learning (ML) fundamentals and the advantages and disadvantages of deep architectures.

LeCun et al. (2015) outlined deep learning (DL) models from convolutional neural networks (CNN) to recurrent neural networks (RNN). They describe DL from a representation learning perspective, showing how DL techniques work, how they can be used successfully in a variety of applications, and how unsupervised learning (UL) could be used to predict the future. They also point out major advances in DL in their bibliography.

Schmidhuber (2015) gave an overview of deep learning covering CNN, RNN, and deep reinforcement learning (RL). He highlights RNNs for sequence processing, while pointing out the limitations of basic DL and NN and tips for improving them.

Nielsen (2015) describes the details of neural networks with code and examples. He also discusses deep neural networks and deep learning to some extent.

Schmidhuber (2014) discussed the history and progress of neural networks in chronological order, categorized them by machine learning approach, and described the use of deep learning in neural networks.

Deng and Yu (2014) describe deep learning categories and techniques, as well as DL applications in several fields.

Bengio (2013) gives a brief overview of DL algorithms, namely supervised and unsupervised networks, optimization and training models, from the perspective of representation learning. He focuses on the many challenges of deep learning, such as scaling algorithms for larger models and data, reducing optimization difficulties, and designing efficient scaling methods.

Bengio et al. (2013) discussed representation and feature learning, namely deep learning. They explore various approaches and models in terms of application, technology, and challenge.

Deng (2011) gives an overview of deep structured learning and its architecture from the perspective of information processing and related fields.

Arel et al. (2010) provided a brief overview of the DL techniques of recent years.

Bengio (2009) discussed deep architectures for artificial intelligence, namely neural networks and generative models.

All recent papers on deep learning (DL) discuss its key concerns from multiple perspectives, which is valuable for DL researchers. However, DL is currently a booming field, and many new techniques and architectures have been proposed since the most recent overview papers were published. Moreover, previous papers approach the field from different angles. Our paper is aimed primarily at learners and novices entering the field, and to that end we strive to provide new researchers, and anyone interested in the area, with a foundation and a clear picture of deep learning.

3. Latest Progress

In this section, we discuss the major deep learning (DL) methods that have recently evolved from machine learning (ML) and artificial neural networks (ANN), the most common form of deep learning.

3.1 Evolution of Deep Architectures

Artificial neural networks (ANN) have come a long way and have given rise to other deep models as well. The first generation of artificial neural networks consisted of layers of simple perceptrons and could perform only a limited number of simple computations. The second generation used backpropagation, which updates neuron weights according to the error rate. Then support vector machines (SVM) surfaced and overtook ANNs for a while. To overcome the limitations of backpropagation, restricted Boltzmann machines (RBM) were proposed to make learning easier. Other techniques and neural networks also appeared at this time, such as feedforward neural networks (FNN), convolutional neural networks (CNN), recurrent neural networks (RNN), deep belief networks, autoencoders, and so on. Since then, ANNs have been improved and redesigned in different ways for a variety of uses.
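
As a concrete illustration of the backpropagation idea mentioned above, the following minimal sketch (ours, purely illustrative; the XOR task, network size, and learning rate are arbitrary assumptions) trains a tiny two-layer network by propagating the output error back to every weight:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: XOR, a classic task a single perceptron cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))   # input -> hidden weights
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # Backward pass: propagate the output error toward the input.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Update each weight according to its contribution to the error.
    W2 -= 0.5 * (h.T @ d_out)
    W1 -= 0.5 * (X.T @ d_h)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```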

Schmidhuber (2014), Bengio (2009), Deng and Yu (2014), Goodfellow et al. (2016), and Wang et al. (2017) outline in detail the evolution and history of deep neural networks (DNN) and deep learning (DL). In most cases, a deep architecture is a multi-layer, nonlinear repetition of a simple architecture, which allows highly complex functions to be computed from the input.

4. Deep Learning Methods

Deep neural networks have achieved great success in supervised learning. Deep learning models have also been very successful in unsupervised, hybrid, and reinforcement learning.

4.1 Deep Supervised Learning

Supervised learning is applied when the data are labeled and a classifier is used for category or numeric prediction. LeCun et al. (2015) offer a concise explanation of supervised learning methods and the formation of deep structures. Deng and Yu (2014) mention and explain many deep networks for supervised and hybrid learning, such as the deep stacking network (DSN) and its variants. Schmidhuber's (2014) research covers all neural networks, from the early ones to the recently successful convolutional neural networks (CNN), recurrent neural networks (RNN), long short-term memory (LSTM), and their improvements.

4.2 Deep Unsupervised Learning

When the input data are unlabeled, unsupervised learning methods can be applied to extract features from the data and to classify or label them. LeCun et al. (2015) predicted a major future role for unsupervised learning in deep learning. Schmidhuber (2014) also describes neural networks for unsupervised learning. Deng and Yu (2014) briefly introduce deep architectures for unsupervised learning and explain the deep autoencoder in detail.

4.3 Deep Reinforcement Learning

Reinforcement learning uses a system of punishments and rewards to predict the next step of a learning model. It is mostly used in games and robotics to solve sequential decision-making problems. Schmidhuber (2014) describes the progress of deep learning in reinforcement learning (RL) and the applications of deep feedforward neural networks (FNN) and recurrent neural networks (RNN) in RL. Li (2017) discusses deep reinforcement learning (DRL), its architectures (e.g., the Deep Q-Network, DQN), and its applications in various fields.
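
To illustrate the reward-and-punishment idea in its simplest form, here is a minimal tabular Q-learning sketch (ours, purely illustrative; the toy chain environment and all constants are assumptions, and a DQN would replace the table below with a deep network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy chain world: states 0..4, actions 0 (left) / 1 (right),
# reward only on reaching the terminal state 4.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1

for episode in range(500):
    s = 0
    while s != n_states - 1:
        # Epsilon-greedy action choice, with random tie-breaking.
        if rng.random() < eps or Q[s, 0] == Q[s, 1]:
            a = int(rng.integers(n_actions))
        else:
            a = int(Q[s].argmax())
        s_next = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Q-learning update: move Q(s, a) toward the reward plus the
        # discounted value of the best next action.
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q[:-1].argmax(axis=1))  # [1 1 1 1]: always move right, toward the reward
```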

Mnih et al. (2016) proposed a DRL framework that uses asynchronous gradient descent for DNN optimization.

van Hasselt et al. (2015) proposed a DRL architecture using deep neural networks (DNN).

5. Deep Neural Networks

In this section, we briefly discuss deep neural networks (DNN) and their recent improvements and breakthroughs. Neural networks function in ways similar to the human brain and consist mostly of neurons and connections. When we talk about deep neural networks, we can assume there is a fairly large number of hidden layers that can be used to extract features from the input and to compute complex functions. Bengio (2009) explains deep-structured neural networks such as convolutional neural networks (CNN), autoencoders (AE), and their variants. Deng and Yu (2014) describe in detail some neural network architectures such as the AE and its variants. Goodfellow et al. (2016) introduce deep feedforward networks, convolutional networks, recurrent networks, and their improvements. Schmidhuber (2014) recounts the complete history of neural networks, from the early networks to recent successful techniques.

5.1 Deep Autoencoder

An autoencoder (AE) is a neural network (NN) whose output is its input. The AE takes the original input, encodes it into a compressed representation, and then decodes that representation to reconstruct the input. In a deep AE, the lower hidden layers are used for encoding, the higher hidden layers for decoding, and error backpropagation is used for training.
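
A minimal sketch of this encode-compress-decode loop (ours, purely illustrative; the data, layer sizes, and training constants are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 8))                 # 64 samples, 8 features each

W_enc = rng.normal(scale=0.1, size=(8, 3))   # encoder: 8 -> 3 (compressed code)
W_dec = rng.normal(scale=0.1, size=(3, 8))   # decoder: 3 -> 8 (reconstruction)

for step in range(2000):
    code = np.tanh(X @ W_enc)                # encode into the compressed code
    X_hat = code @ W_dec                     # decode back to the input space
    err = X_hat - X                          # reconstruction error
    # Backpropagate the reconstruction error through both layers.
    grad_dec = (code.T @ err) / len(X)
    d_code = (err @ W_dec.T) * (1 - code**2)
    grad_enc = (X.T @ d_code) / len(X)
    W_dec -= 0.1 * grad_dec
    W_enc -= 0.1 * grad_enc

print(np.mean(err**2))  # reconstruction MSE drops as training proceeds
```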

5.1.1 Variational Autoencoder

The variational autoencoder (VAE) can be regarded as a decoder. VAEs are built on standard neural networks and can be trained with stochastic gradient descent (Doersch, 2016).
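
The key step that keeps a VAE trainable by stochastic gradient descent is the reparameterization trick. Here is a minimal sketch of it (ours, purely illustrative; the latent dimension and the values of mu and log_var are assumptions standing in for an encoder's outputs):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assume the encoder network has produced these for one input:
mu = np.array([0.3, -0.7])       # mean of the latent Gaussian
log_var = np.array([-1.0, 0.5])  # log-variance of the latent Gaussian

# Reparameterization trick: sample z as a deterministic function of
# (mu, log_var) plus independent noise, so gradients can flow through.
eps = rng.standard_normal(mu.shape)
z = mu + np.exp(0.5 * log_var) * eps   # latent code fed to the decoder

# KL divergence from N(mu, var) to N(0, 1): the regularizer in the VAE loss.
kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
print(z, kl)
```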

5.1.2 Stacked Denoising Autoencoder

In early autoencoders (AE), the encoding layer had a smaller (narrower) dimension than the input layer. In a stacked denoising autoencoder (SDAE), the encoding layer is wider than the input layer (Deng and Yu, 2014).
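
The denoising idea itself is simple: corrupt the input, then train the network to reconstruct the clean version. A minimal single-layer sketch of the corruption step (ours, purely illustrative; the noise level and layer sizes are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 8))                  # clean input

# Corrupt the input, here with additive Gaussian noise (masking noise
# that zeroes random entries is another common choice).
x_noisy = x + 0.3 * rng.standard_normal(x.shape)

# An over-complete (wider-than-input) encoding layer, as in the SDAE.
W_enc = rng.normal(scale=0.1, size=(8, 16))
code = np.tanh(x_noisy @ W_enc)

# Training would minimize ||decode(code) - x||^2 against the CLEAN x,
# forcing the features in `code` to be robust to the corruption.
print(code.shape)  # (1, 16)
```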

5.1.3 Transforming Autoencoder

Deep autoencoders (DAE) can be transformation-variant, i.e., the features extracted by their multiple layers of nonlinear processing can be varied according to the learner's needs. The transforming autoencoder (TAE) can use either the input vector or the target output vector to apply the transformation-invariance property, steering the code in a desired direction (Deng and Yu, 2014).
