[Hydropower and Water Resources] Gao Naiyun: Professor, doctor, doctoral supervisor. Published on: 2019-06-25 | Updated on: 2019-06-25
Professor Gao Naiyun, a well-known scholar, is welcomed as the editor-in-chief of this journal.
An introduction to Professor Gao Naiyun's honors
In 1970, she entered Tongji University to major in water supply and drainage engineering; she graduated in 1974 and stayed on to teach at the university. She received a master's degree in engineering from Tongji University in 1994 and a doctorate in engineering from Tongji University in 1999.
In 1986, she went to the Asian Institute of Technology (AIT) in Thailand for a three-month computer application course. From 2001 to 2002 she spent a year as a visiting professor at Gifu University in Japan. She has made academic visits to France, Britain, Germany, Australia, New Zealand, Italy, the Netherlands, Singapore, Thailand, Malaysia and other countries.
She has served as a teaching assistant, lecturer and associate professor at Tongji University, and formerly as director of the water supply and drainage teaching and research section, party branch secretary, and director of the water technology research institute. She is also deputy director of the Water Supply and Drainage Engineering Professional Steering Committee of the Ministry of Construction; a member of the Civil Engineering Discipline Steering Committee of the Ministry of Construction; a member of the Building Water Supply and Drainage Committee of the China Association for Engineering Construction Standardization; and a member of the Construction Water Supply and Drainage Committee and its Fire Branch. She serves concurrently as an editor of magazines such as "China Water Supply and Drainage", "Water Supply and Drainage", "Environmental Pollution and Prevention" and "Water Resources Protection"; as a member of the expert group for the basic test for nationally registered survey and design engineers and a water-network consulting expert; and as a member of the International Water Association (IWA).
She presided over projects including the "Eleventh Five-Year" national major water project and the National Support Program.
As the first or second completer, she won the second prize of the Shanghai Science and Technology Progress Award in 2011 and 2014 respectively; as the third completer, she won a first prize for provincial and ministerial-level teaching achievement. In 2011, she was named an outstanding party member of the Shanghai Education Commission system.
[Journal of Autonomous Intelligence] Shaping the Next Generation Pharmaceutical Supply Chain Control Tower with Autonomous Intelligence. Published on: 2019-06-18 | Updated on: 2019-06-18
Summary: Technologies such as AI can play a valuable role in the decision-making processes within a CT environment.
Today’s pharmaceutical distributors are faced with several key strategic priorities. These include retaining and managing operating margin, acquiring business agility and controlling pharmaceutical counterfeiting and fraud. Overall, the control tower (CT) concept can transform how healthcare and pharma industries lead and manage their supply chains by shifting to a model in which real-time information gathering, analysis, and decision making are possible.
In essence, a CT is a center of excellence that facilitates a coordinated network to continuously manage complexity and execute at levels that cannot otherwise be managed easily by humans. It must provide the fundamental capabilities that enable the levels of visibility and awareness needed to achieve this mission.
Matthew Liotine conducted research on the next-generation pharmaceutical supply chain control tower, and the results were published in the Journal of Autonomous Intelligence.
This paper summarizes the findings of an industry panel study evaluating how new Autonomous Intelligence technologies, such as artificial intelligence and machine learning, impact the system and operational architecture of supply chain control tower (CT) implementations that serve the pharmaceutical industry. Such technologies can shift CTs to a model in which real-time information gathering, analysis, and decision making are possible. This can be achieved by leveraging these technologies to better manage decision complexity and execute decisions at levels that cannot otherwise be managed easily by humans.
Overall, a CT serves as a command center that enables a firm to work more closely with suppliers, provide customer service more proactively, and ultimately improve profitability. Technologies such as AI can play a valuable role in the decision-making processes within a CT environment. Such decision-making requires learning based on high-quality transaction-based data, less tangible data, and prior human operator decision behavior patterns. The supply chain professionals of the future will evolve from managing exceptions to creating more strategic value through new ways of working. Firms might have to reorganize or reconstitute functional roles within the supply chain, and perhaps information technology, to accommodate machine-based decision-making.
For more information, please visit:
[Journal of Autonomous Intelligence] Article: Toward Global Complex Systems Control: The Autonomous Intelligence Challenge. Published on: 2019-06-18 | Updated on: 2019-06-18
In today's industry, the demand for higher performance under economic and environmental constraints can no longer be satisfied by simply upgrading previous components. New phenomena related to handling system heterogeneity and the growing number of components have recently opened a broad domain of investigation. Because both power and information fluxes are now involved, different problems arise concerning internal system coordination and control, information flux handling, and communication within a networked cluster of systems. Analysis of the passage to the complex stage shows that steps defined for simpler systems have to be reassessed to meet the new requirements imposed by complex status. In particular, for the power flux it is mandatory that asymptotic stability be satisfied inside a robustness ball at least the size of the system uncertainty. Thus, following the bottom-up approach described here, classical trajectory-based system control should be upgraded to better adapted task control.
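The robustness requirement above can be stated compactly. In hypothetical notation (not the author's own symbols), let $p$ denote the system parameters, $p_0$ their nominal value and $\Delta$ the uncertainty bound; the controlled trajectory $x_p(t)$ must then converge to the desired trajectory $x^*(t)$ for every parameter inside a robustness ball at least as large as the uncertainty:

$$\|p - p_0\| \le r,\quad r \ge \Delta \;\Longrightarrow\; \lim_{t \to \infty} \|x_p(t) - x^*(t)\| = 0.$$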
Michel Cotsaftis conducted research on global complex systems control and the autonomous intelligence challenge, and the results were published in the Journal of Autonomous Intelligence.
In this paper, the construction of the new controller proceeds in two steps. First, an explicit trajectory control of functional nature is developed that is asymptotically stable and robust enough to cover the manifold of possible trajectories. Second, by introducing the concept of "useful" information, a task functional expressed in terms of system parameters is set up, which defines the compatible trajectory manifold. From these, a double loop is written that gives the system the ability to accomplish the task for any allowed trajectory by determining its path from its own elements.
The result turns out to perform better than the (also explicit) extension of the Popov criterion to the more general nonlinear, monotonically upper-bounded potentials bounding the system dynamics discussed here. An interesting observation is that, when correctly amended as proposed here, complex systems are not, as commonly believed, a counterexample to the reductionism so strongly influential in science through the Cartesian method, which is supposedly valid only for complicated systems.
For more information, please visit:
[Journal of Autonomous Intelligence] 10.32629/jai.v2i1.37 Published on: 2019-06-18 | Updated on: 2019-06-18
Precipitation greatly affects people's lives and production. Too much rainfall can lead to flash flooding and natural disasters, and can cause severe economic losses and inconvenience to human life. In recent years, many methods have been applied to hydrologic forecasting. Besides the many mechanism models, data-driven models have become popular; they can be divided into two classes: probability-statistics methods and time-series analysis methods.
Recently, the use of combined methods to predict time series has been developing fast in different fields. The idea of hybridizing a data preprocessing method, a forecasting model and an optimization method to predict rainfall is attractive. Weide Li and Juan Zhang conducted research on an innovative intelligent-algorithm model for rainfall forecasting, and the results were published in the Journal of Autonomous Intelligence.
In this paper, a novel hybrid model to forecast rainfall is developed by incorporating singular spectrum analysis (SSA) and the dragonfly algorithm (DA) into the support vector regression (SVR) method. Firstly, SSA is used to extract the trend components of the hydrological data. Then, SVR is utilized to deal with the volatility and irregularity of the precipitation series. Finally, the parameters of SVR are optimized by DA. The proposed SSA-DA-SVR method is used to forecast the monthly precipitation for the Songbai, Panshui, Lanma and Jiulongchi stations. To validate the efficiency of the method, four comparison models, DA-SVR, SSA-GWO-SVR, SSA-PSO-SVR and SSA-CS-SVR, are established.
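The three-stage pipeline above can be sketched roughly in Python. This is an illustration, not the authors' implementation: an RBF kernel ridge regressor stands in for SVR, a plain random search stands in for the dragonfly algorithm, and the synthetic series, window size and lag count are arbitrary choices.

```python
import numpy as np

def ssa_trend(series, window=12, n_components=2):
    """Basic singular spectrum analysis: embed the series in a trajectory
    matrix, keep the leading SVD components, and diagonal-average back."""
    N, K = len(series), len(series) - window + 1
    X = np.column_stack([series[i:i + window] for i in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components]
    trend, counts = np.zeros(N), np.zeros(N)
    for j in range(K):                       # Hankel (diagonal) averaging
        trend[j:j + window] += Xr[:, j]
        counts[j:j + window] += 1
    return trend / counts

def rbf_regress(Xtr, ytr, Xte, gamma, lam=1e-3):
    """RBF kernel ridge regression; a simple stand-in for the SVR stage."""
    k = lambda A, B: np.exp(-gamma * ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))
    alpha = np.linalg.solve(k(Xtr, Xtr) + lam * np.eye(len(Xtr)), ytr)
    return k(Xte, Xtr) @ alpha

def lagged(series, lags=3):
    """Build lagged feature vectors and one-step-ahead targets."""
    X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
    return X, series[lags:]

# Synthetic "monthly precipitation": seasonal cycle plus noise.
rng = np.random.default_rng(0)
t = np.arange(240)
y = 50 + 30 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 5, t.size)

trend = ssa_trend(y)                         # stage 1: SSA trend extraction
X, resid = lagged(y - trend)                 # stage 2: model the residual
Xtr, ytr, Xte, yte = X[:200], resid[:200], X[200:], resid[200:]

# Stage 3: tune the kernel width by random search (standing in for DA).
gammas = rng.uniform(1e-4, 1.0, 30)
scores = [np.mean((rbf_regress(Xtr, ytr, Xte, g) - yte) ** 2) for g in gammas]
best_gamma = gammas[int(np.argmin(scores))]
full_pred = trend[-len(yte):] + rbf_regress(Xtr, ytr, Xte, best_gamma)
```

In the paper's setting the optimizer searches the SVR hyperparameters with DA's swarm dynamics; the random search here only marks where that stage plugs in.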
Compared with the DA-SVR, SSA-GWO-SVR, SSA-PSO-SVR and SSA-CS-SVR models, the proposed hybrid model can effectively improve the prediction accuracy for monthly average precipitation. Thus, the model can be used for rainfall forecasting in the future. As a prediction model, it can also be applied to wind speed and power load forecasting.
For more information, please visit:
[Journal of Autonomous Intelligence] Article: Flame Recognition in Video Images with Color and Dynamic Features of Flames. Published on: 2019-06-18 | Updated on: 2019-06-18
Real-time detection and early warning of fire is an important approach to alleviating the threats posed by fire hazards. Since fire often occurs randomly and a fire scene is usually complicated, traditional fire detection methods are often unable to detect fires and issue warnings in their early stages. Recently, with the abundance of surveillance video cameras, video-based fire detection has become an important approach to the early detection of fire. Such detection methods analyze the features of video images and recognize potential occurrences of flames, so fires can be recognized and brought under control before they develop into disasters. Owing to its ability to detect fires in their early stages, video-based fire detection technology has attracted the attention of many researchers in the area of fire safety, and a large number of methods have been developed to detect the occurrence of flames by analyzing sequences of video images.
Color features are the most important features of flames and have been used extensively in methods for video-based fire detection. Video-based flame detection has become an important approach for the early detection of fire under complex circumstances. However, the detection accuracy of most existing methods remains unsatisfactory.
Jiaqing Chen, Xiaohui Mu and their team conducted research on flame recognition, and the results were published in the Journal of Autonomous Intelligence.
In this paper, the authors develop a new algorithm that can significantly improve the accuracy of flame detection in video images. The algorithm segments a video image and obtains areas that may contain flames by combining a two-step clustering-based approach with the RGB color model. A few new dynamic and hierarchical features associated with the suspected regions, including the flicker frequency of flames, are then extracted and analyzed. The algorithm determines whether a suspected region contains flames by processing the color and dynamic features of the area together with a BP neural network.
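The color-segmentation stage can be illustrated with an RGB rule of the kind commonly used in this literature; the specific inequalities and thresholds below are illustrative assumptions, not the authors' values.

```python
import numpy as np

def flame_color_mask(img, r_min=180, spread_min=40):
    """Mark pixels whose colors match a common RGB flame heuristic:
    R >= G >= B, R above a brightness threshold, and enough spread
    between R and B (flames are reddish-yellow, not gray).
    `img` is an (H, W, 3) uint8 array; thresholds are illustrative."""
    r = img[..., 0].astype(int)
    g = img[..., 1].astype(int)
    b = img[..., 2].astype(int)
    return (r >= g) & (g >= b) & (r >= r_min) & ((r - b) >= spread_min)

# Toy frame: gray background with a small orange "flame" patch.
frame = np.full((8, 8, 3), 120, dtype=np.uint8)
frame[2:4, 2:4] = (230, 160, 40)          # orange pixels
mask = flame_color_mask(frame)
```

In the full algorithm, a mask like this only nominates candidate regions; the dynamic features (e.g. flicker frequency) and the BP network then make the final decision.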
Their test results show that the approach is robust and able to identify the presence of flames under complex circumstances where other interference sources may also exist. In addition, the approach can accurately identify flames that are under control and in safe conditions.
For more information, please visit:
[Journal of Autonomous Intelligence] Article: Learning hand latent features for unsupervised 3D hand pose estimation. Published on: 2019-06-18 | Updated on: 2019-06-18
Summary: The LDTM-based method can accurately estimate a hand pose based on prior knowledge of the hand representation.
Hand pose estimation from depth is the first step for several human-computer interaction applications. It has been widely applied to human-machine interaction (HMI) since it provides the possibility for future multi-touchless interfaces. An accurate hand pose estimation provides a natural way of interaction between human and virtual space that achieves a greater user experience. Different from conventional human-machine interactions, which are limited to 2D plane displays and only suited to users sitting behind the computing devices, hand pose estimation offers 3D user interaction without direct contact with the computing device. This provides a possibility for a new interface leading towards seamless human-computer interaction.
However, hand pose estimation is still a difficult task owing to several challenges that the human hand poses. The hand is very dexterous and has many degrees of freedom. Fingers have high self-similarity and severe self-occlusion. The input depth image is also accompanied by a large amount of noise, which will probably mislead the pose estimator and distort the output results.
Jamal Firmat Banzi, Isack Bulugu and Zhongfu Ye conducted research on learning hand latent features for unsupervised 3D hand pose estimation, and the results were published in the Journal of Autonomous Intelligence.
In this paper, the team presents a novel approach to model hand topology based on LDTM, which captures hand latent features and observable features to construct a hand joint representation. They then integrate LDTM with the deep PCM using a data-independent method to encode the hand representations and map them to the decoded hand depth map. Finally, a multi-layered convolutional neural network based on the deep PCM is utilized to regress a 3D pose space based on the joint locations.
As a result, their system can accurately estimate a hand pose based on prior knowledge of the hand representation. This yields a robust and reliable hand pose estimation system that can achieve a greater user experience.
For more information, please visit:
[Journal of Autonomous Intelligence] Article: Security Challenges and Social for the Heterogeneity of IoT Applications. Published on: 2019-06-15 | Updated on: 2019-06-16
Content: The Internet of Things (IoT) integrates data from the virtual world and the physical world. It involves smart objects that sense and respond in a variety of industrial, commercial and home environments. As the Internet of Things expands the number of connected devices, cyber attackers may enter the physical world in which we live by exploiting security vulnerabilities in these new systems. New security issues arise from the heterogeneity of IoT applications and devices as well as their large-scale deployment.
Abuse of IoT devices may come from malicious individuals or malware. Any single type of security measure, such as access control management, can be compromised. Managing security with a multi-layered approach makes abuse more difficult, because several independent layers of security must be broken.
Deepak Choudhary conducted a survey on the security of heterogeneous IoT applications, which was published in the Journal of Autonomous Intelligence. The novelty proposed in this study is the development of a practical, low-cost, multi-level security approach for IoT applications that combines physical proximity registration of IoT devices, communication encryption between mobile devices and IoT devices, geographic location verification, embedded safety operation control, and exception/confirmation reports. The low cost of the method is achieved by combining commonly available techniques in a novel manner.
To learn more about IoT security, please visit:
[Journal of Autonomous Intelligence] Article: PC-VINS-Mono: A Robust Mono Visual-Inertial Odometry with Photometric Calibration. Published on: 2019-06-15 | Updated on: 2019-06-15
Content: Feature detection and tracking rely on image gray-value information and are a very important part of visual-inertial odometry (VIO); the tracking results significantly affect the accuracy of the estimation results and the robustness of VIO. Under high-contrast lighting conditions, the exposure time of auto-exposure cameras often changes, so the gray values of the same features change between frames, which makes feature detection and tracking very challenging; the nonlinear camera response function and lens attenuation further exacerbate this problem. However, few VIO methods make full use of photometric camera calibration or discuss the effects of photometric calibration on VIO.
Yao Xiao, Xiaogang Ruan and Xiaoqing Zhu proposed PC-VINS-Mono, a robust monocular visual-inertial odometry with photometric calibration, which was published in the Journal of Autonomous Intelligence.
In this work, the research team used the photometric response function, vignetting and exposure time to propose PC-VINS-Mono, a robust visual-inertial odometry with photometric calibration. The proposed algorithm can be understood as an extension of VINS-Mono with photometric calibration. With this extension, the algorithm can be used under high-contrast lighting conditions with auto-exposure cameras, where the exposure time varies from frame to frame and violates the brightness-constancy assumption. The team evaluated the algorithm on the TUM VI dataset, which includes sequences under different lighting conditions. Comparative experiments show that the performance of PC-VINS-Mono is significantly improved by photometric calibration. For cameras with unknown response functions and lens attenuation coefficients, the experimental results show that even with only exposure-time calibration, the performance of the algorithm increases in most cases. Experiments that skipped the CLAHE step showed degraded performance, confirming that the CLAHE algorithm must be applied before feature detection to improve image contrast, because photometric calibration may reduce the contrast of the image.
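The idea behind photometric calibration can be sketched as follows. If the measured intensity is I = f(t * V(x) * B), where f is the camera response function, t the exposure time, V the vignetting map and B the scene radiance, then inverting f and dividing out t and V restores brightness constancy between frames. The gamma-type response below is a hypothetical stand-in; real systems estimate f and V by calibration.

```python
import numpy as np

def photometric_correct(I, t, V, gamma=2.2):
    """Undo a (hypothetical) gamma-type response, vignetting and
    exposure:  I = f(t * V * B)  with  f(u) = u ** (1 / gamma),
    so  B = f_inv(I) / (t * V).  A fixed gamma is illustration only;
    real calibrations estimate f and V from data."""
    f_inv = I ** gamma               # invert the response function
    return f_inv / (t * V)           # remove exposure and vignetting

# Two frames of the same scene at different exposure times should map
# to the same corrected radiance, restoring brightness constancy.
rng = np.random.default_rng(1)
B = rng.uniform(0.1, 1.0, (4, 4))                      # true scene radiance
V = np.clip(1.0 - 0.3 * rng.random((4, 4)), 0.6, 1.0)  # vignetting map
img1 = (0.5 * V * B) ** (1 / 2.2)                      # exposure t = 0.5
img2 = (1.5 * V * B) ** (1 / 2.2)                      # exposure t = 1.5
rec1 = photometric_correct(img1, 0.5, V)
rec2 = photometric_correct(img2, 1.5, V)
```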
Click here to read more:
[Journal of Autonomous Intelligence] Article: A new way to improve Arabic POS tags. Published on: 2019-06-15 | Updated on: 2019-06-15
Content: An important task of natural language processing is part-of-speech (POS) tagging. There has been much research on Arabic, but due to the complexity of the task and the characteristics of Arabic, performance has not reached a standard level. A POS tagger is designed to assign a word class to each word in a text; in Arabic, a word can be a verb, a noun, or a particle. Therefore, an Arabic POS tagger should label each word in the text with a tag indicating its category (verb, noun, or particle). The tags added to a text serve several purposes and are used in language identification, word-frequency analysis, syntactic structure analysis, and other analyses.
Mohamed Labidi proposed a new combination method to improve Arabic POS tagging, which was published in the Journal of Autonomous Intelligence. In this work, he studied the combination of two different Arabic POS taggers: the first based on maximum entropy, the second based on statistics/rules. In addition, he added a knowledge-based approach for annotating Arabic particles. This combination increased the accuracy of the tagger.
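The combination idea can be sketched with toy components; the two stand-in taggers, the tag set and the particle list below are all hypothetical, not Labidi's models.

```python
# Toy sketch of combining two POS taggers with a knowledge-based
# override for particles. The taggers and word lists are hypothetical,
# not Labidi's actual maximum-entropy or statistical/rule-based models.
PARTICLES = {"fi", "min", "ila"}     # illustrative transliterated particles

def tagger_a(word):                  # stands in for the maximum-entropy tagger
    return "NOUN" if word.endswith("a") else "VERB"

def tagger_b(word):                  # stands in for the statistical/rule-based tagger
    return "NOUN" if len(word) > 3 else "VERB"

def combined_tag(word):
    """Knowledge base first; otherwise keep the taggers' agreement,
    falling back to tagger A on disagreement."""
    if word in PARTICLES:
        return "PART"
    a, b = tagger_a(word), tagger_b(word)
    return a if a == b else a

tags = [(w, combined_tag(w)) for w in ["kataba", "fi", "dar"]]
```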
Click here to read more:
[Journal of Autonomous Intelligence] Article: Monitoring On Guardrails To Afford Road Safety Using IoT. Published on: 2019-06-15 | Updated on: 2019-06-15
Content: As the number of vehicles increases, accidents on highways are on the rise. According to statistics on all traffic accidents, 55 percent of accidents occur on expressways, of which about 30 percent involve vehicle guardrails; one-third of annual fatal accidents are caused by collisions with guardrails. Therefore, there is an urgent need for an effective road accident monitoring solution. Efforts are ongoing to monitor and protect roads and bridges, thereby extending the life of the infrastructure, increasing driver safety, and providing diverse security measures to reduce the number of accidents and injuries.
Vignesh Venkataraman conducted a study on monitoring guardrails with IoT to improve road safety, the results of which were published in the Journal of Autonomous Intelligence. The study proposes a new design capable of detecting an accident in a very short period of time and sending basic information to the emergency center within seconds, covering the geographic coordinates as well as the time and angle of the accident. This alert message is sent to the rescue team quickly, which helps save precious lives.
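The detection-and-alert flow can be sketched as follows; the acceleration threshold, coordinates and message fields are illustrative assumptions, not taken from the paper.

```python
import math

# Sketch of the alert logic described above: flag an impact when the
# measured acceleration magnitude exceeds a threshold, then package the
# basic information (coordinates, time, angle) for the emergency center.
# The threshold and message fields are illustrative, not the paper's.
IMPACT_THRESHOLD_G = 4.0

def detect_impact(ax, ay, az):
    """Return True when the acceleration magnitude (in g) suggests a crash."""
    return math.sqrt(ax * ax + ay * ay + az * az) >= IMPACT_THRESHOLD_G

def build_alert(lat, lon, timestamp, angle_deg):
    """Assemble the minimal alert payload sent to the rescue team."""
    return {"lat": lat, "lon": lon, "time": timestamp, "angle": angle_deg}

alert = None
if detect_impact(3.5, 2.0, 1.0):     # magnitude ~4.15 g -> treated as impact
    alert = build_alert(13.0827, 80.2707, "2019-06-15T10:31:02Z", 42.0)
```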
Click on the link to read the full article:
[Journal of Autonomous Intelligence] Article: Intelligent fish tank based on WiFi module. Published on: 2019-06-15 | Updated on: 2019-06-15
Content: With the rapid development of the Internet in recent decades, all walks of life have become inseparable from its applications. The Internet helps people improve their quality of life and work efficiency. To meet growing market demand and keep pace with social development, the intelligent and convenient Internet of Things (IoT) has emerged on the basis of the Internet, and research and development of IoT applications, both networked and mobile, is increasing. The Internet of Things is a network of advanced objects with unique identities, each of which is interconnected or connected to a remote server to provide more efficient services. It is a platform on which everyday devices become smarter, everyday processing becomes intelligent, and everyday communication becomes informative. While the Internet of Things is still finding its form, its impact has already made incredible progress as a universal solution for connected scenarios.
Feng Yan and Fuyao Wang conducted a study on a smart fish tank, which was published in the Journal of Autonomous Intelligence. The study introduces an intelligent fish tank with an embedded HC-SR04 ultrasonic ranging module and a DS18B20 temperature sensor, with an STC89C52 as the control core. The system can remotely control the tank and collect temperature and water-level data via a WiFi module (ESP8266-01). When the water level falls below the default, the system adjusts by adding water to the tank. People can also access the data and control the tank at any time. The microcontroller is connected to the Internet via the WiFi module; with the help of MicroPython firmware, a Python program on the WiFi module connects to the home WiFi and provides data transfer capabilities. Android smartphones can connect to the system via WiFi and send commands. In this way, the fish tank can be remotely controlled to keep the water temperature and water level in the tank stable.
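The control logic can be sketched in plain Python (the real system runs MicroPython on the ESP8266-01 with an STC89C52 core; the setpoints and action names here are illustrative assumptions, not from the paper).

```python
# Plain-Python sketch of the tank's control decisions. The real firmware
# runs on an STC89C52 with an ESP8266-01 WiFi module; the setpoints and
# action names below are illustrative, not taken from the paper.
WATER_LEVEL_MIN_CM = 20.0   # default level below which water is added
TEMP_SETPOINT_C = 26.0
TEMP_TOLERANCE_C = 1.0

def control_step(level_cm, temp_c):
    """Decide the actuator actions for one pair of sensor readings."""
    actions = []
    if level_cm < WATER_LEVEL_MIN_CM:                 # HC-SR04 reading low
        actions.append("open_water_valve")
    if temp_c < TEMP_SETPOINT_C - TEMP_TOLERANCE_C:   # DS18B20 reading cold
        actions.append("heater_on")
    elif temp_c > TEMP_SETPOINT_C + TEMP_TOLERANCE_C:
        actions.append("heater_off")
    return actions
```

On the device, a loop would read the sensors, call a routine like this, drive the actuators, and report the readings over WiFi to the phone app.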
[Building Technology Research] Professor Yu Bo. Published on: 2019-06-10 | Updated on: 2019-06-18
Professor Yu Bo, a well-known scholar, is welcomed as the editor-in-chief of this journal.
An introduction to Professor Yu Bo's honors
In 2012, he was awarded the Excellent Doctoral Dissertation of the Guangxi Zhuang Autonomous Region; in 2015, he was selected for the "Guangxi Scholarship for Innovation and Entrepreneurship Enhancement Program" and the "Bosch Youth Innovative Talents Training and Incentive Funding Program". In 2017, he was selected as one of the first batch of "Thousands of Young and Middle-aged Teachers in Guangxi Higher Education" and won the second and third prizes for teaching achievements of the Guangxi Zhuang Autonomous Region, and he won the first prize of the Guangxi Science and Technology Progress Award twice, in 2011 and 2013.
His research mainly focuses on the quantitative analysis and design of concrete structure durability, the deterioration mechanism and safety assessment of the whole-life performance of concrete structures, stochastic analysis of engineering structures, and risk assessment. He has hosted 2 National Natural Science Foundation projects and more than 10 provincial-level scientific research projects, applied for 14 national invention patents and 7 computer software copyrights, and published 1 academic monograph and more than 110 academic papers in well-known journals such as Engineering Structures, Journal of Engineering Mechanics-ASCE, Journal of Structural Engineering-ASCE, Chinese Science, Journal of Civil Engineering, Journal of Building Structures and Journal of Building Materials (more than 20 indexed in SCI and more than 50 in EI). As a main technical staff member, he edited 3 local standards and guidelines and participated in the compilation of 1 national industry standard.
[Journal of Autonomous Intelligence] 10.32629/jai.v1i2.28 Published on: 2019-06-08 | Updated on: 2019-06-15
Content: Parameter estimation is very important for identifying static and dynamic system models, because these essential parameters generally determine the system's probabilistic and deterministic properties. The maximum likelihood method is based on probability and statistics; it leads to maximum likelihood parameter estimation, considered one of the classical probabilistic and Bayesian methods. Maximum likelihood estimation has the attractive limiting properties of consistency, asymptotic normality and efficiency. It is widely used in aircraft dynamic parameter identification, inertial instrument error coefficient estimation and traffic engineering flow monitoring.
So far, several optimization methods have been proposed to solve the maximum likelihood parameter estimation problem. They can be roughly divided into three categories: conventional analytical methods, traditional numerical approximation methods, and bio-inspired (biological heuristic) optimization methods.
Nowadays, the maximum likelihood parameters of static systems are usually estimated by traditional analytical methods. However, most maximum likelihood parameter estimation problems in dynamic systems are highly nonlinear and difficult to solve by conventional analytical methods. Therefore, researchers tend to seek traditional numerical approximation techniques to overcome these difficulties.
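As a minimal concrete example of numerical maximum likelihood estimation, consider estimating the rate of an exponential distribution, where the closed-form answer 1/mean(x) is available as a check; the grid search below is a crude stand-in for the numerical optimizers discussed.

```python
import numpy as np

# Minimal numerical MLE: estimate the rate of an exponential
# distribution by minimizing the negative log-likelihood over a grid
# (a crude stand-in for the numerical optimizers discussed above).
# For this model the closed-form MLE is 1 / mean(x), which lets us
# check the numerical answer.
rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=5000)    # true rate = 0.5

def neg_log_likelihood(lam, data):
    """-log L(lam) = -n*log(lam) + lam * sum(data) for Exp(lam)."""
    return -len(data) * np.log(lam) + lam * data.sum()

grid = np.linspace(0.01, 2.0, 2000)
lam_hat = grid[np.argmin([neg_log_likelihood(l, x) for l in grid])]
lam_closed = 1.0 / x.mean()                  # analytical MLE for comparison
```

In dynamic-system identification the likelihood has no closed form, which is exactly why the numerical and bio-inspired optimizers surveyed here are needed.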
Yongzhong Lu and his team carried out a study from the perspective of bio-inspired optimization techniques, the results of which were published in the Journal of Autonomous Intelligence. The review attempts to provide a comprehensive view of traditional and bio-inspired optimization techniques for maximum likelihood parameter estimation, to highlight challenges and key issues and to promote further research. The focus of the paper is on recently used traditional and bio-inspired optimization techniques in maximum likelihood estimation.
The studies show that traditional numerical approximation techniques for maximum likelihood parameter estimation need further improvement, and that hybrid traditional and bio-inspired techniques should be developed to estimate maximum likelihood parameters by combining their merits.
[Journal of Autonomous Intelligence] Article: Recent Advances in Particle Swarm Optimization for Large Scale Problems. Published on: 2019-06-08 | Updated on: 2019-06-15
Content: With the advent of the era of big data, the scale of real-world optimization problems with many decision variables has grown. How to develop new optimization algorithms for these large-scale problems, and how to extend the scalability of existing ones, presents further challenges in the field of bio-inspired computation. Therefore, solving these complex large-scale problems to produce truly useful results is one of the hottest topics at the moment. As a branch of swarm-intelligence-based algorithms, Particle Swarm Optimization (PSO) and its wide variety of applications to large-scale problems have grown rapidly over the past decade.
Danping Yan and Yongzhong Lu conducted a study on particle swarm optimization algorithms for large-scale problems, which was published in the Journal of Autonomous Intelligence. The paper mainly introduces recent research results and trends, and highlights unresolved challenges and key issues with significant impact, to encourage further research on large-scale PSO theory and its applications in the coming years.
Despite the success of large-scale PSO in recent years, many unresolved problems remain that are likely to have a large impact on further research progress: for example, the theoretical research on large-scale PSO still lags behind its applications, as do the application of PSO to the distribution of big data and the theoretical analysis of optimal grouping and its features in the dimensionality-reduction-based covariation framework.
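For readers unfamiliar with the base algorithm these papers scale up, a minimal PSO can be sketched as follows; the parameter values are the common textbook choices, not anything tuned for large-scale problems.

```python
import numpy as np

# Minimal particle swarm optimization on the sphere function. Parameter
# values (w, c1, c2) are common textbook defaults, not large-scale-tuned.
def pso(f, dim=5, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5, 5, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()                             # personal bests
    pbest_val = np.apply_along_axis(f, 1, pos)
    g = pbest[np.argmin(pbest_val)].copy()         # global best
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = pos + vel
        vals = np.apply_along_axis(f, 1, pos)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

best_x, best_val = pso(lambda x: np.sum(x ** 2))
```

Large-scale variants replace this monolithic update with decompositions such as cooperative coevolution over variable groups, which is where the grouping questions mentioned above arise.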
Click on the link to read the full article:
[Journal of Autonomous Intelligence] Journal News. Published on: 2019-05-15 | Updated on: 2019-06-11
Mobile phone technology has changed the way humans understand and interact with the world. The latest technology, the fifth-generation mobile standard (5G), is currently being deployed in some parts of the world. This raises an obvious question: what factors will drive the development of the sixth generation of mobile technology? How will 6G differ from 5G, and what kinds of interactions and activities will it enable that 5G cannot?
Today, Razvan-Andrei Stoica and Giuseppe Abreu of Jacobs University Bremen in Germany studied the technology of the 6G era, identifying the limitations of 5G and the factors they believe will drive the development of 6G. Their conclusion is that artificial intelligence will become the main driving force of mobile technology, and 6G will become the driving force of a new generation of machine intelligence applications.
What kind of transformative improvements can 6G provide? According to Stoica and Abreu, it will enable rapid cooperation among intelligent agents to solve complex challenges. One obvious application is network optimization, but others include financial market monitoring and planning, healthcare optimization, and "nowcasting" (the ability to predict and respond to events as they occur) at a previously unimaginable scale.
Artificial intelligence agents are clearly destined to play an important role in our future. "In order to take advantage of the true power of these agents, collaborative AI is the key," Stoica and Abreu said. "From the perspective of the nature of the 21st-century mobile society, it is clear that this cooperation can only be achieved through wireless communication." If Stoica and Abreu are correct, artificial intelligence will become the driving force shaping the future communication network.
The Journal of Autonomous Intelligence is a peer-reviewed interdisciplinary journal covering original research and review articles on all aspects of autonomous intelligence, with a focus on the dynamics of artificial intelligence and robotics. The journal aims to disseminate research in the field of autonomous intelligence by demonstrating the latest advances in the field and the most advanced theories, technologies and implementations. It aims to bring scientific discourse and discovery to a wide range of international audiences by providing scientists, engineers, researchers and academics with a platform to share, discuss and advocate new issues and developments in multiple areas of autonomous intelligence.
The journal's scope includes the dynamics of power and information flux in systems with corresponding organization, a frontier of theoretical and applied engineering science on the path toward system autonomy. The journal publishes articles, reviews, opinions and lectures that help advance the phenomenon of autonomous intelligence. Scholars in related fields are welcome to contribute actively and to help build a good platform for communication.
[Journal of Autonomous Intelligence] Article: Neural processor in artificial intelligence advancement. Published on：2019-05-09 | Updated on：2019-06-15
Content: A neural network is a set of algorithms, modeled loosely after the human brain, designed to recognize patterns. Neural networks interpret sensory data through a kind of machine perception, labeling or clustering raw input. The patterns they recognize are numerical, contained in vectors, into which all real-world data such as images, sounds, text or time series must be translated.
A neural network is a computational model based on the structure and function of biological neural networks. Information flowing through the network affects its structure, because a neural network changes, or learns, in a sense, based on input and output. Although neural networks are very complex, with weights changing for each new piece of data over time, an experimental model of a high-level neural processor architecture has been proposed. The neural processor performs all the functions of a common neural network, such as adaptive learning, self-organization, real-time operation, and fault tolerance.
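The learning process described above, in which weights change in response to input and output data, can be sketched in a few lines. This is an illustrative toy example, not code from the paper: a small two-layer network, with hypothetical sizes and learning rate, trained on the XOR pattern using plain NumPy.

```python
import numpy as np

# Toy sketch of "adaptive learning": the weights change as data flows
# through the network, so the error on the target pattern decreases.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs as vectors
y = np.array([[0], [1], [1], [0]], dtype=float)              # target pattern (XOR)

W1 = rng.normal(size=(2, 8))   # hidden-layer weights
W2 = rng.normal(size=(8, 1))   # output-layer weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

loss_initial = None
for step in range(5000):
    h = sigmoid(X @ W1)        # forward pass: hidden activations
    out = sigmoid(h @ W2)      # network prediction
    loss = ((out - y) ** 2).mean()
    if loss_initial is None:
        loss_initial = loss
    # backward pass: squared-error gradient propagated layer by layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out    # weights adapt to the data
    W1 -= 0.5 * X.T @ d_h
loss_final = loss

print(f"loss: {loss_initial:.3f} -> {loss_final:.3f}")
```

The structure of the network stays fixed here; what "learns" is the set of weights, which is the sense in which the text says the flow of information changes the network.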
Indian systems engineer Manu Mitra studied neural processors in the advancement of artificial intelligence; the work was published in the Journal of Autonomous Intelligence. The paper analyzes neural processing and introduces it through experiments, including graphical representations of the data analysis.
[Journal of Autonomous Intelligence] Article: A comprehensive analysis of smart home energy management system. Published on：2019-04-30 | Updated on：2019-06-15
Content: With the rapidly growing demand for electric energy and rising energy prices, renewable energy accounts for an increasing proportion of electricity generation. Various environmental restrictions also limit generation from traditional energy sources. All of these challenges have prompted the power industry to shift its focus to smart demand-side management technology.
The development of smart grid technology gives consumers the opportunity to schedule their own energy-use patterns. The main purpose of the whole exercise is to improve energy efficiency and reduce the peak-to-average ratio (PAR) of electricity. The two-way flow of information between power utilities and consumers in smart grids opens up new areas of application. The main component is an Energy Management Controller (EMC) that collects demand response (DR) information, i.e., real-time energy prices, from various devices through a home gateway (HG). Using the DR information, the EMC computes an optimal energy schedule, which is then provided to the various devices via the HG. A rooftop photovoltaic system serves as a local generation microgrid in the home and can be integrated into the national grid. Under this energy management plan, whenever solar generation exceeds the energy needs of household appliances, the surplus power is fed into the grid. As a result, the different devices on the consumer's premises operate in the most efficient way.
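The core idea of the EMC, using real-time prices to decide when flexible appliances should run, can be sketched very simply. This is a hypothetical illustration, not code from the paper: the prices, appliance load and window length are invented, and the "scheduler" just picks the cheapest contiguous window of hours.

```python
# Hypothetical DR price signal: $/kWh for each of 8 upcoming hours,
# as an EMC might receive it through the home gateway.
prices = [0.30, 0.28, 0.12, 0.10, 0.11, 0.25, 0.35, 0.33]

run_hours = 2    # appliance needs 2 consecutive hours (illustrative)
load_kw = 1.5    # power drawn while running (illustrative)

def cheapest_window(prices, run_hours):
    """Return the start hour of the contiguous window with the lowest total price."""
    return min(
        range(len(prices) - run_hours + 1),
        key=lambda s: sum(prices[s:s + run_hours]),
    )

start = cheapest_window(prices, run_hours)
cost = load_kw * sum(prices[start:start + run_hours])
print(f"run at hour {start}, energy cost ${cost:.3f}")
```

Real schedulers in the literature handle many appliances at once with mathematical or heuristic optimization, but the shape of the problem, shifting flexible loads toward cheap hours to cut cost and PAR, is the same.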
Tesfahun Molla, Baseem Khan and Pawan Singh have analyzed optimization techniques for smart home energy management systems; the work was published in the Journal of Autonomous Intelligence. The analysis provides a comprehensive review of smart-home appliance optimization techniques based on both mathematical and heuristic methods. It also shows that smart home energy management, handling demand response and management techniques, plays an important role in the smart grid environment. The main goal of the home energy management problem is to optimize the schedules of the various household appliances so as to minimize overall energy consumption.