• Current Issue
  • Online First
  • Archive
  • Most Downloaded
  • Special Issue
  • Special Column
    2024,16(1):1-10, DOI: 10.13878/j.cnki.jnuist.20221027003
    Wearing a helmet has become a mandatory requirement for electric bike riders. To automatically check whether a rider wears a helmet, a helmet and license plate detection approach based on an improved YOLOv5m model is proposed, which can locate and recognize the license plate of an unhelmeted rider so as to track down violators. The model is trained on a self-built dataset, uses the DIOU loss function instead of the GIOU loss function, and replaces weighted NMS with DIOU_NMS to enhance recognition in dense cycling scenes. Meanwhile, the ECA attention mechanism is added to the Backbone and Neck parts to improve recognition accuracy for small- and medium-sized targets. Then, the K-means algorithm is used to re-cluster the anchor box sizes. Finally, the Mosaic data augmentation method is improved. Experimental results show that the mAP of the proposed approach is 92.7%, which is 2.15, 5.7, and 6.9 percentage points higher than the original YOLOv5m, YOLOv4-tiny, and Faster RCNN, respectively. It can be concluded that the improved YOLOv5m model can accurately recognize riders' helmets and electric bikes' license plates.
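As a sketch of the Distance-IoU idea the paper adopts (the actual YOLOv5m training code is not given here), the DIoU between two axis-aligned boxes can be computed as below; the DIoU loss is then simply 1 - DIoU:

```python
def diou(box_a, box_b):
    """Distance-IoU between two boxes given as (x1, y1, x2, y2).

    DIoU = IoU - d^2 / c^2, where d is the distance between the box
    centers and c is the diagonal of the smallest enclosing box.
    """
    xa1, ya1, xa2, ya2 = box_a
    xb1, yb1, xb2, yb2 = box_b
    # Intersection and union areas
    iw = max(0.0, min(xa2, xb2) - max(xa1, xb1))
    ih = max(0.0, min(ya2, yb2) - max(ya1, yb1))
    inter = iw * ih
    union = (xa2 - xa1) * (ya2 - ya1) + (xb2 - xb1) * (yb2 - yb1) - inter
    iou = inter / union
    # Squared distance between box centers
    d2 = ((xa1 + xa2) / 2 - (xb1 + xb2) / 2) ** 2 \
       + ((ya1 + ya2) / 2 - (yb1 + yb2) / 2) ** 2
    # Squared diagonal of the smallest enclosing box
    c2 = (max(xa2, xb2) - min(xa1, xb1)) ** 2 \
       + (max(ya2, yb2) - min(ya1, yb1)) ** 2
    return iou - d2 / c2
```

Unlike plain IoU, the center-distance penalty still provides a gradient when two boxes do not overlap, which is what makes DIoU_NMS better behaved in dense riding scenes.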
    2024,16(1):11-19, DOI: 10.13878/j.cnki.jnuist.20230502002
    An algorithm based on improved YOLOv5s is proposed to address the small proportion of traffic signs in images, low detection accuracy, and complex surrounding environments. First, the ECA (Efficient Channel Attention) mechanism is added to the backbone network to enhance the feature extraction ability of the network and effectively handle complex surroundings. Second, HASPP (Hybrid Atrous Spatial Pyramid Pooling) is proposed to enhance the network's ability to combine context. Finally, the neck structure of the network is modified to allow efficient fusion of high-level features with low-level features while avoiding information loss across convolutional layers. Experimental results show that the improved algorithm achieves an average detection accuracy of 94.4%, a recall rate of 74.1%, and a precision of 94.0% on the traffic sign dataset, which are 3.7, 2.8, and 3.4 percentage points higher than those of the original algorithm, respectively.
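A minimal numpy sketch of the ECA idea referenced above (illustrative only: the learned 1-D convolution weights are replaced here by a simple averaging kernel):

```python
import numpy as np

def eca(feature_map, k=3):
    """Efficient Channel Attention over a (C, H, W) feature map (sketch).

    Global average pooling yields one descriptor per channel; a 1-D
    convolution of size k across the channel dimension captures local
    cross-channel interaction; a sigmoid produces per-channel weights.
    """
    y = feature_map.mean(axis=(1, 2))            # (C,) global average pool
    kernel = np.full(k, 1.0 / k)                 # stand-in for learned weights
    y = np.convolve(y, kernel, mode="same")      # 1-D conv across channels
    weights = 1.0 / (1.0 + np.exp(-y))           # sigmoid gate
    return feature_map * weights[:, None, None]  # rescale each channel
```

The appeal of ECA over heavier attention blocks is that it adds only k weights per module, which matters when the detector must stay real-time.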
    2024,16(1):20-29, DOI: 10.13878/j.cnki.jnuist.20230424004
    To address the problem that safety zone division along electrified railways with complex backgrounds requires actual fixed standard parts as references and covers only a small range, a smart safety zone division approach independent of reference objects is proposed. The GSD (Ground Sample Distance) parameters are calculated from relevant parameters of images collected by UAVs (Unmanned Aerial Vehicles), and a DeepLabv3+ model with an ECA-Net module is used to accurately segment the railway in the image. Then, a series of image processing operations such as edge detection, opening operation, and probabilistic Hough transform are used to extract the key pixel points that make up the railway, and the least squares algorithm is used to fit the railway and obtain its mathematical expression. Finally, mathematical models, GSD parameters, and the mathematical expression of the railway are combined to complete the safety zone division. Experimental results show that the proposed approach achieves measurement accuracy over 90%, does not require fixed reference objects, and has strong adaptability and high robustness. Its practicality and reliability provide effective technical support for safety management along electrified railways.
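The GSD computation and the least-squares rail fitting can be sketched as follows; the camera parameters below are hypothetical examples, not values from the paper:

```python
import numpy as np

def ground_sample_distance(sensor_width_mm, focal_length_mm,
                           altitude_m, image_width_px):
    """Ground distance covered by one pixel (m/px) for a nadir-looking camera."""
    return (sensor_width_mm * altitude_m) / (focal_length_mm * image_width_px)

# Least-squares fit of a rail line y = a*x + b through extracted edge pixels
xs = np.array([10, 20, 30, 40, 50], dtype=float)
ys = np.array([12, 21, 33, 41, 52], dtype=float)   # toy "rail pixel" data
a, b = np.polyfit(xs, ys, 1)

# Hypothetical sensor: 13.2 mm wide, 8.8 mm focal length, 5472 px wide,
# flying at 100 m; GSD converts pixel distances in the fit to metres.
gsd = ground_sample_distance(13.2, 8.8, 100.0, 5472)
```

Once the rail has a mathematical expression in pixel coordinates, multiplying pixel offsets by the GSD gives metric offsets, which is what lets the safety zone be drawn without a physical reference object.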
    The rapid development of network technology, human-computer interaction, and artificial intelligence has given birth to the metaverse and further promoted the digital transformation of all aspects of people's lives. The concept of the metaverse rose to prominence in 2021 and has attracted extensive attention from industry, academia, the media, and the public. This paper analyzes the metaverse in depth from the perspectives of technology and application. First, the concept and connotation of the metaverse are elaborated in terms of its definition, origin and development, characteristics, and key technologies (network and computing, Internet of Things, human-computer interaction, electronic games, blockchain, digital twin, etc.). Then, the enterprises and application examples involved in the metaverse are discussed. Finally, the existing challenges and opportunities in the development of the metaverse are analyzed, and future research and applications are prospected. Through this analysis of the current development status and research trends of the metaverse and a scientific evaluation of its potential applications, we aim to provide a useful reference for metaverse researchers.
    2024,16(1):46-55, DOI: 10.13878/j.cnki.jnuist.20230424001
    To address the numerous hyperparameters, the loss of long time series information, and the difficulty of distinguishing primary from secondary features when applying the Long Short-Term Memory (LSTM) network to grain yield prediction, this paper proposes a combined data-driven grain yield forecasting model. For hyperparameters, the proposed model introduces the Bald Eagle Search optimization algorithm with dynamic weights and Laplacian mutation (WLBES) to search for optimal LSTM hyperparameters, avoiding manual tuning. For prediction, the model uses Ridge Regression (RR) to correct the residuals of the prediction results, compensating for the LSTM's information loss, and adds an attention mechanism that distinguishes primary and secondary features by weight so as to emphasize features more relevant to grain yield. The results show that the combined WLBES-LSTM-RR model decreases the root mean square error (RMSE) by 75% and 19% compared with the LSTM and WLBES-LSTM models, respectively, and substantially decreases the RMSE compared with other combined models based on optimized LSTM. The combined model thus offers higher accuracy in grain yield prediction.
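The ridge-based residual correction stage can be sketched as follows, with a toy constant-offset residual standing in for the LSTM's actual errors (all data below are made up for illustration):

```python
import numpy as np

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge regression: w = (X^T X + alpha*I)^-1 X^T y."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ y)

# Toy stand-in for the LSTM stage: base predictions with a systematic bias
y_true = np.array([10.0, 12.0, 14.0, 16.0, 18.0])
y_lstm = y_true - 1.0                      # pretend the LSTM underestimates by 1
residual = y_true - y_lstm                 # what the ridge stage must learn
X = np.ones((5, 1))                        # simplest residual model: an offset
w = ridge_fit(X, residual, alpha=1e-6)
y_corrected = y_lstm + X @ w               # second-stage corrected forecast
```

In the paper's pipeline the residual model sees real features rather than a constant column, but the two-stage structure, base predictor plus ridge correction of its residuals, is the same.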
    2024,16(1):56-65, DOI: 10.13878/j.cnki.jnuist.20230421001
    Air pollution seriously endangers residents' travel safety and health. As a comprehensive indicator of air quality, the Air Quality Index (AQI) alerts the public to air quality and enables people to make more informed travel decisions. By predicting changes in air quality in advance, governments and environmental protection departments can take emergency measures to reduce air pollution. Here, we propose an integrated deep learning model based on a Convolutional Neural Network and Gated Recurrent Unit (CNN-GRU) for AQI prediction. The CNN extracts the spatial and temporal characteristics of air pollutants and AQI and completes the feature mapping, while the GRU models the temporal relationships and efficiently computes the AQI prediction. The daily average concentrations of six major air pollutants (PM2.5, PM10, SO2, CO, NO2, O3) in Beijing and Guangzhou during 2014-2022 are selected as a case study, and the AQI is predicted using the CNN-GRU model. The results show that, compared with the Multiverse-Optimized Generalized Regression Neural Network (MVO-GRNN) and the Genetic Algorithm-optimized BP neural network (GA-BP), the proposed CNN-GRU model has the smallest AQI prediction error.
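The AQI itself is a piecewise-linear composite of per-pollutant sub-indices. A sketch, assuming the breakpoint values of the Chinese HJ 633-2012 regulation (only the lowest PM2.5 bands are shown; verify the table before real use):

```python
def sub_index(conc, breakpoints):
    """Linearly interpolate an individual AQI (IAQI) from concentration
    breakpoints: each entry is (bp_lo, bp_hi, iaqi_lo, iaqi_hi)."""
    for bp_lo, bp_hi, iaqi_lo, iaqi_hi in breakpoints:
        if bp_lo <= conc <= bp_hi:
            return (iaqi_hi - iaqi_lo) / (bp_hi - bp_lo) * (conc - bp_lo) + iaqi_lo
    raise ValueError("concentration outside the breakpoint table")

# PM2.5 24-h average breakpoints (ug/m3) -> IAQI, lowest four bands (assumed)
PM25 = [(0, 35, 0, 50), (35, 75, 50, 100),
        (75, 115, 100, 150), (115, 150, 150, 200)]
# The overall AQI is the maximum sub-index over all six pollutants.
```

This is why the model's six pollutant series jointly determine the target: whichever pollutant's sub-index is largest on a given day sets that day's AQI.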
    2024,16(1):66-75, DOI: 10.13878/j.cnki.jnuist.20220521001
    Indices such as tumor cell density, nucleocytoplasmic ratio, and average nucleus size have important implications for cancer grading and prognosis. Segmentation of nuclei is therefore the fundamental prerequisite for tumor microenvironment analysis in computational pathology, and statistical analysis of segmentation results is of great significance for exploring new tumor markers. However, the morphology of nuclei in pathological images is irregular, their staining is uneven, and adjacent nuclei adhere at their edges. Since segmentation errors at the edges barely affect the overall loss as long as the main body of each nucleus is correctly segmented, adhering nuclei can easily be treated as a single segmentation target by existing deep learning algorithms. To address overlapping nuclei, a new segmentation algorithm based on the Transformer and distance maps, abbreviated TDM-Net, is proposed. It integrates the multi-head self-attention mechanism of the Transformer with contextual information to fully explore proximity relationships, and enhances the learning of image details by introducing a distance map that emphasizes the interior of nuclei and de-emphasizes their boundaries. The algorithm's Dice coefficient, precision, Aggregated Jaccard Index (AJI), and Hausdorff distance are 0.797 9, 0.756 1, 0.667 2, and 10.11, respectively. The results show that the proposed TDM-Net outperforms other segmentation algorithms, effectively improving nuclei segmentation accuracy and resolving the overlap of different nuclei.
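The distance map used to emphasize nucleus interiors can be illustrated with a brute-force distance transform (a sketch for tiny masks, not the paper's implementation; real pipelines would use an efficient exact transform):

```python
import numpy as np

def distance_map(mask):
    """Distance from each foreground pixel to the nearest background pixel.

    Brute force over all background pixels: interior pixels get large
    values, boundary pixels get small ones, so a loss weighted by this
    map emphasizes nucleus interiors and down-weights shared edges.
    """
    fg = np.argwhere(mask == 1)
    bg = np.argwhere(mask == 0)
    dist = np.zeros(mask.shape, dtype=float)
    for r, c in fg:
        dist[r, c] = np.sqrt(((bg - (r, c)) ** 2).sum(axis=1)).min()
    return dist

mask = np.array([[0, 0, 0, 0, 0],
                 [0, 1, 1, 1, 0],
                 [0, 1, 1, 1, 0],
                 [0, 1, 1, 1, 0],
                 [0, 0, 0, 0, 0]])
d = distance_map(mask)   # center pixel scores higher than edge pixels
```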
    2024,16(1):76-82, DOI: 10.13878/j.cnki.jnuist.20230526003
    To address the boring content of traditional wrist rehabilitation training and the low training efficiency caused by users' lack of motivation, a wrist training system based on a myoelectrically controlled virtual reality game was designed. Surface electromyography (sEMG) signals of wrist movement were collected, and the wrist joint movement intention was decoded through the principle of muscle synergy to control the virtual reality game. A random disturbance force was introduced into the game, and interaction with the virtual environment was realized through impedance control, enabling users to explore different movement control strategies. The feasibility of the system was verified through model calibration experiments, and training experiments assessed the training effect by evaluating task completion time and path efficiency. The experimental results show that introducing the random disturbance force reduces the task completion time by 24% and improves the path efficiency by 26%; the designed system enables users to perform motion control more efficiently and improves training efficiency.
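The impedance-control mapping from force to motion can be sketched as a second-order model integrated with semi-implicit Euler; the mass, damping, and stiffness values below are illustrative, not the system's:

```python
def impedance_step(x, v, f_ext, m=1.0, b=8.0, k=50.0, dt=0.001):
    """One semi-implicit Euler step of the impedance model
    m*a + b*v + k*x = f_ext, mapping an external (e.g. disturbance)
    force to the motion of the controlled cursor."""
    a = (f_ext - b * v - k * x) / m
    v_new = v + a * dt
    x_new = x + v_new * dt
    return x_new, v_new

# Under a constant force the cursor settles at the static equilibrium f/k
x, v = 0.0, 0.0
for _ in range(20000):                 # 20 s of simulated time
    x, v = impedance_step(x, v, f_ext=5.0)
```

Tuning m, b, and k changes how compliantly the virtual cursor yields to the disturbance force, which is the knob the training system turns to make the task harder or easier.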
    2024,16(1):83-96, DOI: 10.13878/j.cnki.jnuist.20230216003
    Pervasive edge computing allows peer devices to establish independent communication connections, enabling users to process massive computing tasks with low delay. However, distributed devices cannot obtain the global system status of the network in real time, so fairness of resource utilization cannot be guaranteed. To solve this problem, a resource allocation scheme for pervasive edge computing based on Generative Adversarial Networks (GAN) is proposed. In this scheme, a multi-objective optimization problem is established to minimize time delay and energy consumption, which is then transformed into a maximum reward problem according to stochastic game theory. A computation offloading algorithm based on multi-agent imitation learning is then proposed, which combines multi-agent Generative Adversarial Imitation Learning (GAIL) with a Markov Decision Process (MDP) to approximate expert performance and supports online execution. Finally, combined with the Non-dominated Sorting Genetic Algorithm Ⅱ (NSGA-Ⅱ), the time delay and energy consumption are jointly optimized. Simulation results show that, compared with other edge computing resource allocation schemes, the proposed solution shortens the time delay by 30.8% and reduces the energy consumption by 34.3%.
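The non-dominated sorting at the heart of NSGA-Ⅱ, applied here to (delay, energy) pairs where both objectives are minimized, can be sketched as:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (both objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_fronts(points):
    """Sort (delay, energy) points into successive Pareto fronts,
    the ranking step of NSGA-II."""
    remaining = list(points)
    fronts = []
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q != p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

pts = [(1, 5), (2, 2), (5, 1), (3, 3), (4, 4)]
fronts = nondominated_fronts(pts)   # first front holds the trade-off winners
```

Solutions on the first front are the delay/energy trade-offs no other schedule beats on both objectives at once, which is what "jointly optimized" means here.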
    2024,16(1):97-105, DOI: 10.13878/j.cnki.jnuist.20230407001
    Sonar images are prone to low contrast, low resolution, and edge distortion, so it is difficult to accurately separate effective signals from noise during denoising, resulting in reduced contrast, unclear edge contours, and severe detail loss in the denoised image. This paper therefore proposes a sonar image denoising algorithm based on adaptive Wiener filtering and 2D-VMD (Two-Dimensional Variational Mode Decomposition). First, the noisy image is decomposed by 2D-VMD into a series of sub-modes with different center frequencies. The effective modal components are selected via correlation coefficients and structural similarity, then processed by adaptive Wiener filtering, and finally the filtered modal components are reconstructed to remove the noise. Experimental results show that the proposed algorithm achieves the best results in terms of correlation coefficient and structural similarity, with a peak signal-to-noise ratio slightly lower than that of NSST-domain denoising. Considering both objective metrics and visual effects, the proposed algorithm performs best at preserving image details and edges after denoising.
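The selection of effective modal components can be sketched as follows (a simplification: the paper combines correlation coefficients with structural similarity, while only correlation is used here):

```python
import numpy as np

def select_modes(noisy, modes, threshold=0.1):
    """Keep decomposed modes whose correlation with the noisy image
    exceeds a threshold; weakly correlated modes are treated as noise."""
    kept = []
    for m in modes:
        r = np.corrcoef(noisy.ravel(), m.ravel())[0, 1]
        if r > threshold:
            kept.append(m)
    return kept

# Toy example: one signal-like mode, one alternating "noise" mode
noisy = np.arange(16.0).reshape(4, 4)
modes = [2 * noisy, np.tile([1.0, -1.0], 8).reshape(4, 4)]
kept = select_modes(noisy, modes)
```

Only the retained modes are Wiener-filtered and summed back, so the thresholding step decides which frequency bands survive reconstruction.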
    2024,16(1):106-113, DOI: 10.13878/j.cnki.jnuist.20230809001
    To address the high time and labor costs and low efficiency of traditional remote sensing image processing, an improved 2DCNN (2D Convolutional Neural Network) model abbreviated En-De-2CP-2DCNN is proposed, aiming to improve the speed and accuracy of remote sensing Hyperspectral Image (HSI) classification while reducing the number of parameters. First, 1DCNN, 2DCNN, and 3DCNN were used to carry out classification experiments on the Pavia University HSI dataset, and their advantages and disadvantages were compared and analyzed. Second, the 2DCNN was selected as the basic model to maintain fast processing speed without increasing model parameters; it was then improved by referring to the Encoder-Decoder structure of SegNet and integrating the idea of double convolutional pooling, and the learning strategy was optimized. The results show that the F1-score of the proposed En-De-2CP-2DCNN model is 99.96%, reaching the level of 3DCNN (99.36%) and 2.68 percentage points higher than before the improvement (97.28%); the processing speed (5 s/epoch) is comparable to that of 1DCNN and faster than that of 3DCNN (96 s/epoch); and the parameter size is reduced from 3.55 MB to 2.01 MB, larger than 3DCNN (316 KB) but much smaller than 1DCNN (19.21 MB). The proposed En-De-2CP-2DCNN model thus realizes accurate, fast, and lightweight HSI classification; in particular, the improvements in processing speed and parameter size facilitate lightweight deployment on mobile terminals.
    2014,6(5):405-419, DOI:
    With the rapid development of the Internet of Things, cloud computing, and the mobile internet, the rise of Big Data has attracted increasing attention, bringing not only great benefits but also crucial challenges in how to manage and utilize it. This paper describes the main aspects of Big Data, including its definition, data sources, key technologies, data processing tools, and applications, and discusses the relationship between Big Data and cloud computing, the Internet of Things, and mobile internet technology. Furthermore, the paper analyzes the core technologies of Big Data and industrial Big Data solutions, and discusses Big Data applications. Finally, general development trends of Big Data are summarized. This review helps readers understand the current development status of Big Data and provides references for scientifically utilizing its key technologies.
    2009(1):1-15, DOI:
    2013,5(5):385-396, DOI:
    Coordinated control of multi-agent systems has recently been a hot topic in the control field, due to its wide application in cooperative control of multiple autonomous vehicles, vehicle traffic control, formation control of unmanned aircraft, resource allocation in networks, and so on. First, background on multi-agent systems, the concept of an agent, and relevant graph theory are introduced. Then, the domestic and international research status of swarming/flocking problems, formation control problems, consensus problems, and network optimization is summarized and analyzed. Finally, open problems in multi-agent systems are proposed, to encourage deeper study of the theory and application of coordinated control of multi-agent systems.
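The consensus problems surveyed here rest on graph-Laplacian dynamics; a minimal sketch for three agents on an undirected path graph:

```python
import numpy as np

# Path graph 1-2-3: each agent moves toward its neighbors,
# x_i' = -sum_j a_ij (x_i - x_j), i.e. x' = -L x
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)   # adjacency matrix
L = np.diag(A.sum(axis=1)) - A           # graph Laplacian

x = np.array([1.0, 5.0, 9.0])            # initial agent states
dt = 0.05
for _ in range(2000):
    x = x - dt * (L @ x)                 # discretized consensus update
# Since the graph is connected, all states converge to the average (5.0)
```

Convergence to the initial average is the defining property of this protocol on connected undirected graphs, and it is the primitive on which flocking and formation control results in the survey are built.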
    2010(5):410-413, DOI:
    2017,9(2):159-167, DOI: 10.13878/j.cnki.jnuist.2017.02.006
    Various indoor positioning techniques have been developed and widely applied in both manufacturing and daily life. Due to electromagnetic interference and multipath effects, traditional Wi-Fi, Bluetooth, and other wireless locating technologies can hardly achieve high accuracy. Modulated white LEDs can provide both illumination and location information, enabling highly accurate indoor positioning. In this paper, we first introduce several modulation methods for visible light positioning systems and compare their characteristics. Then, we propose a viable indoor positioning scheme based on visible light communications and discuss two different demodulation methods. Next, we introduce several positioning algorithms used in visible light communication systems. Finally, the problems and prospects of visible light communication based indoor positioning are discussed.
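One classical positioning algorithm used in such systems, trilateration from distances to known LED anchors, can be sketched as a linearized least-squares solve (the anchor layout below is a made-up example):

```python
import numpy as np

def trilaterate(anchors, dists):
    """Position from distances to known anchors.

    Linearize ||p - a_i||^2 = d_i^2 by subtracting the last anchor's
    equation, giving the linear system 2(a_i - a_n)p = b_i, then solve
    in the least-squares sense.
    """
    a_n, d_n = anchors[-1], dists[-1]
    A = 2.0 * (anchors[:-1] - a_n)
    b = (d_n ** 2 - dists[:-1] ** 2
         + (anchors[:-1] ** 2).sum(axis=1) - (a_n ** 2).sum())
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Three ceiling LEDs at known 2-D positions (toy layout, metres)
anchors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
p_true = np.array([1.0, 2.0])
dists = np.linalg.norm(anchors - p_true, axis=1)   # ideal measured distances
p_hat = trilaterate(anchors, dists)
```

In a visible light system the distances come from received signal strength or time-difference measurements of each LED's modulated signal; with noisy distances, more anchors simply add rows to the least-squares system.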
    2012,4(4):351-361, DOI:
    In recent years, cloud computing, as a new computing service model, has become a research hotspot in computer science. This paper gives a brief analysis of and survey on current cloud computing systems in terms of definition, deployment models, characteristics, and key technologies. Then, the major international and domestic research enterprises and application products in cloud computing are compared and analyzed. Finally, the challenges and opportunities in current cloud computing research are discussed, and future directions are pointed out. This survey thus provides a scientific analysis of, and references for, the use and operation of cloud computing.
    2011(1):1-22, DOI:
    System identification comprises the theory and methods of establishing mathematical models of systems. Mathematical modeling has a long research history, but the system identification discipline is only several decades old. In these few decades, system identification has achieved great developments: new identification methods have been born one after another, and the research results cover the theory and applications of the natural and social sciences, including physics, biology, earth science, meteorology, computer science, economics, psychology, political science, and so on. In this context, we revisit some basic problems of system identification, which benefits the development of the discipline. This introductory paper briefly covers the definition of identification; system models and identification models; the basic steps and purposes of identification, including experimental design and data preprocessing; and the types of identification methods, including least squares identification methods, gradient identification methods, auxiliary model based identification methods, multi-innovation identification methods, and hierarchical identification methods.
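The least squares identification methods listed above include the recursive form; a minimal sketch identifying a two-parameter linear model from noiseless data (the data and true parameters are made up for illustration):

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive least squares step with forgetting factor lam:
    theta <- theta + K (y - phi^T theta), K = P phi / (lam + phi^T P phi)."""
    phi = phi.reshape(-1, 1)
    K = P @ phi / (lam + (phi.T @ P @ phi).item())
    e = y - phi.ravel() @ theta            # prediction error (innovation)
    theta = theta + K.ravel() * e
    P = (P - K @ phi.T @ P) / lam          # covariance update
    return theta, P

# Identify y = 2*u1 - 3*u2 from noiseless input-output data
samples = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, 1.0), (1.0, 3.0)]
theta = np.zeros(2)
P = 1e6 * np.eye(2)                        # large P: uninformative prior
for u1, u2 in samples:
    phi = np.array([u1, u2])
    theta, P = rls_update(theta, P, phi, 2.0 * u1 - 3.0 * u2)
```

The same update skeleton underlies the auxiliary-model, multi-innovation, and hierarchical variants the paper surveys; they differ mainly in how the regressor phi and the innovation are constructed.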
    2017,9(2):174-178, DOI: 10.13878/j.cnki.jnuist.2017.02.008
    With the deepening study of nonlinear effects in optical fiber, distributed optical fiber sensors have been widely studied and applied. This paper introduces the applications of optical fiber sensors. To realize different types of distributed fiber sensing, the sensing principles based on three kinds of scattered light, Brillouin scattering, Raman scattering, and Rayleigh scattering, are summarized. Finally, future development directions of distributed fiber sensing are prospected.
    2014,6(5):426-430, DOI:
    We propose a scheme to produce a continuous-variable (CV) pair-entanglement frequency comb by nondegenerate optical parametric down-conversion in an optical oscillator cavity, in which a multichannel periodically poled LiTaO3 crystal with varied poling periods serves as the gain medium. Using the CV entanglement criteria, we prove that every pair generated from the corresponding channel is entangled. The characteristics of signal and idler entanglement are discussed. The CV pair-entanglement frequency comb may be significant for applications in quantum communication and computation networks.
    2013,5(6):544-547, DOI:
    For power quality signals in the steady state, this paper combines the Hanning window function with the Fast Fourier Transform (FFT) and applies the combination to harmonic analysis of power quality. Matlab simulations verify the feasibility of the proposed windowed FFT method, and the results show that combining the Hanning window with the FFT can significantly reduce harmonic leakage, effectively weaken interference between harmonics, and accurately measure the amplitude and phase of the power signal.
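The windowed-FFT harmonic measurement can be sketched in a few lines (numpy stands in for Matlab here; the signal's frequencies and amplitudes are made-up test values, not measurements from the paper):

```python
import numpy as np

fs, n = 1000, 1000                      # 1 kHz sampling, one-second record
t = np.arange(n) / fs
# 50 Hz fundamental (amplitude 1.0) plus a 3rd harmonic at 150 Hz (0.2)
x = np.sin(2 * np.pi * 50 * t) + 0.2 * np.sin(2 * np.pi * 150 * t)

win = np.hanning(n)
spectrum = np.abs(np.fft.rfft(x * win))
# One-sided amplitude, corrected for the Hanning window's coherent gain
amp = spectrum * 2 / win.sum()
freqs = np.fft.rfftfreq(n, 1 / fs)
a50 = amp[np.argmin(np.abs(freqs - 50))]    # recovered fundamental amplitude
a150 = amp[np.argmin(np.abs(freqs - 150))]  # recovered harmonic amplitude
```

The window's low sidelobes are what suppress the leakage between the 50 Hz fundamental and its harmonics; the price is a wider main lobe, which the amplitude correction by the window's coherent gain compensates for.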
    2014,6(3):226-230, DOI:
    As a modulation scheme with relatively strong anti-interference capability, quadrature phase shift keying (QPSK) has been extensively used in wireless satellite communication. This paper describes the Matlab simulation of QPSK demodulation and designs an all-digital QPSK demodulator on FPGA. The core of demodulation is synchronization, including carrier synchronization and symbol synchronization. Carrier synchronization is completed with a numerical Costas loop, while symbol synchronization is achieved through modulus-square spectrum analysis, and the results are simulated in Matlab. Following the idea of software-defined radio, the communication functions are implemented by upgradable or replaceable software as far as possible. The parameter values obtained through Matlab simulation, combined with an appropriate hardware system, realize the design of the proposed all-digital FPGA-based meteorological satellite demodulator.
    2017,9(6):575-582, DOI: 10.13878/j.cnki.jnuist.2017.06.002
    Knowledge graph technology has been widely studied in recent years. In this paper, we introduce the construction methods and recent development of knowledge graphs in detail, and summarize their interdisciplinary applications and future research directions. The paper details the key technologies of textual, visual, and multi-modal knowledge graphs, such as information extraction, knowledge fusion, and knowledge representation. As an important part of knowledge engineering, the knowledge graph, and especially the multi-modal knowledge graph, is of great significance for efficient knowledge management, acquisition, and sharing in the era of big data.
    2013,5(5):414-420, DOI:
    With the continuous increase of road vehicles, occasional congestion caused by traffic accidents seriously affects travelers' commuting efficiency and the overall operation of the road network. Real-time, accurate forecasting of short-term traffic flow is key to intelligent transportation systems and a precondition for relieving congestion through route guidance and clearing. Given the uncertain and nonlinear features of traffic volume, a model integrating an improved BP neural network with an autoregressive integrated moving average (ARIMA) model is established to forecast short-term traffic flow. A case study shows that the combined model outperforms the single models in forecasting performance and accuracy.
    2014,6(6):515-519, DOI:
    This paper proposes a coarse-to-fine two-step detection scheme to mine outliers in multivariate time series (MTS). Based on the confidence interval of the data in a sliding window, features of both the variation trend and the relative variation trend are constructed and used in the two detection steps; a rapid feature extraction algorithm is also studied. The outlier detection scheme is then applied to mine outliers before and after an accident at a 110 kV grid transformer substation in Jiangsu province. Data sets from various equipment tables, collected by the OPEN3000 data surveillance system, were checked by the proposed scheme, and the experimental results indicate that the algorithm can rapidly and precisely locate outliers.
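The sliding-window confidence-interval screening can be sketched as follows (the window size, threshold, and data are illustrative, not the paper's):

```python
import numpy as np

def window_outliers(series, window=5, k=3.0):
    """Flag points falling outside mean +/- k*std of the preceding
    sliding window, a simple confidence-interval outlier check."""
    flags = []
    for i in range(window, len(series)):
        w = series[i - window:i]
        mu, sigma = w.mean(), w.std()
        flags.append(bool(abs(series[i] - mu) > k * sigma))
    return flags

# A substation measurement series with one injected anomaly
s = np.array([10.0, 10.2, 9.9, 10.1, 10.0, 10.1, 25.0, 10.0])
flags = window_outliers(s)   # only the spike at 25.0 is flagged
```

A coarse screen like this cheaply narrows the candidate set; the finer second step can then afford the more expensive trend-feature comparison on the few flagged points.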
    2015,7(1):86-91, DOI:
    Based on NCEP GDAS and GBL reanalysis data with resolutions of 1°×1° and 2.5°×2.5° respectively, the trajectory of the air mass at 100 m altitude over Hetian meteorological station is simulated by HYSPLIT (Hybrid Single Particle Lagrangian Integrated Trajectory model), developed by the Air Resources Laboratory of NOAA, to estimate the effects of integration error and resolution error on the trajectory calculation error. The contribution of the integration error is found to be very small; it increases slightly with the integration time and is unrelated to the resolution of the meteorological data. The resolution error varies over time and is found to be related to the topography, the weather system, and the interpolation. Trajectories simulated from datasets of different resolutions differ significantly from each other, indicating that the resolution error contributes more to the trajectory calculation error than the integration error does.
    2011(1):23-27, DOI:
    To solve sudoku more efficiently, a novel approach is proposed. We employ real-number coding to remove the integer constraint, and use the L0 norm to guarantee the sparsity of the solution. Moreover, the L1 norm is used to approximate the L0 norm on the basis of the RIP and KGG conditions. Finally, slack vectors are introduced to transform the L1-norm problem into a convex linear program, which is solved by the primal-dual interior point method. Experiments demonstrate that this algorithm reaches a 100% success rate on easy, medium, difficult, and evil levels, and an 86.4% success rate on 17-clue sudokus. Besides, the average computation time is quite short and independent of the difficulty of the puzzle. Overall, this algorithm is superior to both constraint programming and the Sinkhorn algorithm in terms of success rate and computation time.






      Address: No. 219, Ningliu Road, Nanjing, Jiangsu Province