    2024,16(6):737-750, DOI: 10.13878/j.cnki.jnuist.20240125002
    Abstract:
To address the challenges in small object detection tasks, such as the small size of target images, blurred target features, and difficulty in distinguishing targets from backgrounds, a method based on dual-stream contrastive feature learning and multi-scale image degradation augmentation is proposed. First, the input images of the contrastive learning model are subjected to multi-scale degradation augmentation, enhancing the model's ability to perceive and capture small targets. Second, contrastive representations are learned simultaneously in both the spatial and frequency domains to obtain more discriminative target recognition features, thereby improving the model's ability to differentiate between targets and backgrounds. To verify the effectiveness of the proposed scheme, ablation experiments are designed and the detection performance is compared with that of other advanced algorithms. Experimental results show that the proposed scheme improves mean Average Precision (mAP) by 3.6% over the baseline algorithm on the MS COCO dataset, and improves mAP for small objects (mAPS) by 7.7% over mainstream advanced algorithms. On the VisDrone2019 dataset, the proposed method achieves a 2.4% increase in mAP over the baseline, demonstrating overall performance superior to both the baseline and other mainstream advanced algorithms. Visual analysis of the detection results indicates a significant reduction in false negative and false positive rates for small object detection.
    2024,16(6):751-759, DOI: 10.13878/j.cnki.jnuist.20240118001
    Abstract:
To address the blurry textures of repaired images and the unstable training of existing image inpainting algorithms, this paper proposes a Generative Adversarial Network (GAN) based image inpainting approach leveraging the diffusion process. By incorporating the diffusion model into a dual-discriminator GAN, both the images produced by the generator and the real images undergo a forward diffusion process to obtain their Gaussian-noised counterparts. These images are then fed into the discriminator to enhance inpainting quality and improve training stability. Style loss and perceptual loss are introduced into the loss function to learn semantic feature differences, eliminate motion blur, and preserve more detail and edge information in the inpainting results. Qualitative and quantitative analyses, along with ablation experiments, are conducted on the CelebA and Places2 datasets. The evaluation and restoration outcomes show the superior performance of the proposed approach. Compared with current inpainting methods, it achieves an average improvement of 1.26 dB in Peak Signal-to-Noise Ratio (PSNR) and 1.84% in Structural Similarity Index Measure (SSIM), while reducing the L1 error by an average of 25.7%. Furthermore, the evolution of the loss function indicates that the image inpainting algorithm with the diffusion process trains more stably.
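The forward diffusion step described above, which adds Gaussian noise to both generated and real images before they reach the discriminator, can be sketched in a few lines. The closed-form noising equation below is the standard one; the noise schedule and sample values are illustrative assumptions, not taken from the paper.

```python
import math
import random

def forward_diffusion(x0, t, betas, rng=random.Random(0)):
    """Noise a sample x0 up to step t in closed form:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps,
    where alpha_bar_t is the cumulative product of (1 - beta_i)."""
    alpha_bar = 1.0
    for i in range(t):
        alpha_bar *= 1.0 - betas[i]
    return [math.sqrt(alpha_bar) * v
            + math.sqrt(1.0 - alpha_bar) * rng.gauss(0.0, 1.0)
            for v in x0]

betas = [0.02] * 100          # a simple constant noise schedule (assumption)
x0 = [1.0, -0.5, 0.25]        # a toy "image" of three pixel values
x_noisy = forward_diffusion(x0, 50, betas)
```

In the paper's setting, both the generator output and the real image would be noised this way before being passed to the discriminator.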
    2024,16(6):760-770, DOI: 10.13878/j.cnki.jnuist.20240321001
    Abstract:
Deep Neural Networks (DNNs) are vulnerable to specially designed adversarial examples and prone to deception. Although current detection techniques can identify some malicious inputs, their protective capabilities remain insufficient against complex attacks. This paper proposes a novel unsupervised adversarial example detection method based on unlabeled data. The core idea is to transform the adversarial example detection problem into an anomaly detection problem through feature construction and fusion. To this end, five core components are designed: image transformation, a neural network classifier, heatmap generation, distance calculation, and an anomaly detector. First, the original images are transformed, and the images before and after transformation are input into the neural network classifier. The prediction probability array and convolutional layer features are extracted to generate a heatmap. The detector is extended from focusing solely on the model's output layer to also covering input layer features, enhancing its ability to model and measure the disparities between adversarial and normal samples. Subsequently, the KL divergence of the probability arrays and the shift distance of the heatmap focus points before and after transformation are calculated, and these distance features are input into the anomaly detector to determine whether the example is adversarial. Experiments on the large-scale, high-quality ImageNet dataset show that our detector achieves an average AUC (Area Under the ROC Curve) of 0.77 against five different types of attacks, demonstrating robust detection performance. Compared with other cutting-edge unsupervised adversarial example detectors, ours achieves a drastically higher TPR (True Positive Rate) while maintaining a comparable false alarm rate, indicating a significant advantage in detection capability.
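One of the two distance features above, the KL divergence between the prediction probability arrays before and after the image transformation, is straightforward to compute. A minimal sketch follows; the softmax values are made-up illustrative numbers, not outputs of the paper's classifier.

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete probability distributions,
    with a small epsilon to avoid log(0)."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Softmax outputs for the same image before and after transformation.
p_before = [0.85, 0.10, 0.05]
p_after  = [0.80, 0.12, 0.08]
score = kl_divergence(p_before, p_after)   # larger shifts suggest adversarial input
```

The intuition is that a benign image's predictions change little under mild transformations, so a large `score` is anomalous.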
    2024,16(6):771-781, DOI: 10.13878/j.cnki.jnuist.20230211001
    Abstract:
With the continuous intensification of global climate change and rapid urbanization, urban waterlogging disasters caused by extreme rainfall events have become increasingly severe, posing a serious challenge for many cities around the world. Here, we propose a deep learning approach based on Long Short-Term Memory (LSTM) to predict urban waterlogging depth, using rainfall data from May to August 2021 measured by 75 national automatic meteorological observation stations in Zhejiang's Zhuji city together with water depth data from a typical waterlogging site. The relationship between rainfall and waterlogging depth constructed by the LSTM provides a 2-hour-ahead urban waterlogging depth forecast at 15-minute intervals. Compared with Random Forest (RF) and Artificial Neural Network (ANN) models, the proposed LSTM approach, which uses water depth and precipitation data from the past 4 hours to predict waterlogging depth over the next 2 hours, demonstrates the best performance, with a lower root mean square error (<5.6 cm), a higher correlation coefficient (>0.93), and a higher Nash-Sutcliffe efficiency coefficient (>0.86). We conclude that the proposed deep learning approach is feasible and applicable for urban waterlogging depth prediction.
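The evaluation metrics quoted above, root mean square error and the Nash-Sutcliffe efficiency coefficient, are easy to reproduce. The depth values below are invented for illustration, not the study's data.

```python
import math

def rmse(obs, pred):
    """Root mean square error between observations and predictions."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def nse(obs, pred):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations.
    1 is a perfect fit; 0 means no better than the observed mean."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / ss_tot

obs  = [10.0, 12.0, 15.0, 14.0, 11.0]   # observed water depth (cm), hypothetical
pred = [ 9.5, 12.4, 14.2, 14.6, 10.8]   # model forecast (cm), hypothetical
```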
    2024,16(6):782-790, DOI: 10.13878/j.cnki.jnuist.20230905003
    Abstract:
Aiming at the distributed permutation flowshop scheduling problem with variable processing speed, a dual-population algorithm is proposed to optimize both the makespan and the total energy consumption of the machines. First, an initialization method mixing four strategies is used to generate a high-quality initial population. Second, specific evolution methods are designed according to the characteristics of the two populations, and a dynamic guide factor is introduced to adjust their evolution modes. Meanwhile, an energy-saving speed-regulation strategy is proposed to further reduce energy consumption. Finally, a dynamic population strategy is proposed to balance the resources of the two populations. Simulation results verify the effectiveness of each strategy and show that the proposed dual-population algorithm outperforms current multi-objective evolutionary algorithms.
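For reference, the makespan objective for a permutation flowshop can be computed with the standard recurrence (shown here without the paper's variable processing speeds or the distributed-factory dimension). The processing times are toy values.

```python
def makespan(perm, proc_times):
    """Makespan of a permutation flowshop.
    proc_times[j][m] is the processing time of job j on machine m;
    jobs visit machines 0..M-1 in order, in the sequence given by perm."""
    n_machines = len(proc_times[0])
    finish = [0.0] * n_machines          # completion time on each machine
    for j in perm:
        for m in range(n_machines):
            # a job starts on machine m when both the machine and the job's
            # previous operation are free
            start = max(finish[m], finish[m - 1] if m > 0 else 0.0)
            finish[m] = start + proc_times[j][m]
    return finish[-1]

proc = [[3, 2], [1, 4], [2, 2]]          # 3 jobs, 2 machines (toy instance)
best = min(makespan(p, proc) for p in
           [(0, 1, 2), (0, 2, 1), (1, 0, 2), (1, 2, 0), (2, 0, 1), (2, 1, 0)])
```

Evolutionary algorithms like the dual-population one above search the permutation space instead of enumerating it, since n! sequences quickly become intractable.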
    2024,16(6):791-800, DOI: 10.13878/j.cnki.jnuist.20231108002
    Abstract:
Path tracking is essential for unmanned driving. This article presents the design of a path tracking system for unmanned trucks, aiming to enhance accuracy and stability across various speeds. The system employs a Linear Quadratic Regulator (LQR) optimized through an improved Genetic Algorithm (GA). First, a two-degree-of-freedom dynamic model and a tracking error model of the vehicle are established based on the natural coordinate system. An LQR controller is then designed that eliminates steady-state errors and enhances tracking accuracy through feedforward control. Second, the genetic algorithm is enhanced to optimize the weight matrix of the LQR controller, further improving the accuracy and stability of path tracking. Finally, the control effectiveness of the designed LQR controller is simulated and verified across a range of operating conditions on a joint Matlab/Simulink and TruckSim simulation platform. The results show that the GA-optimized LQR controller improves tracking accuracy by about 68.5% and 49.4% at speeds of 30 km/h and 60 km/h, respectively, in the double lane change scenario; in the U-turn scenario, tracking accuracy is enhanced by approximately 12.0% and 25.5%, respectively. The controller also demonstrates higher stability, with position and heading errors kept within 0.17 m and 0.11 rad, respectively, thereby validating the efficacy of the proposed tracking control scheme.
    2024,16(6):801-809, DOI: 10.13878/j.cnki.jnuist.20240104002
    Abstract:
To tackle the complexity and nonlinearity inherent in natural gas load sequences, this paper proposes a combined forecasting model that integrates Time2Vec, LSTM (Long Short-Term Memory), TCN (Temporal Convolutional Network), and an attention mechanism. Initially, the Pearson correlation coefficient is used for correlation analysis to extract the meteorological features with strong relevance. Subsequently, the Time2Vec time vector embedding layer is introduced to convert the time series data into a continuous vector space, enhancing the model's computational efficiency in processing time series information. The temporal features extracted by Time2Vec, alongside the meteorological features selected via the Pearson correlation coefficient, are then fed into both the LSTM and TCN models for prediction, exploiting the long-term memory capability of LSTM and the local feature extraction capability of TCN. Finally, the two models are combined through an attention mechanism and assigned weights according to their importance to obtain the final prediction. The experimental results show that the proposed Time2Vec-LSTM-TCN-Attention model outperforms other combined models in adaptability and accuracy for natural gas load forecasting.
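The first step, screening meteorological features by Pearson correlation, can be sketched as follows. The load and feature series, and the 0.8 selection threshold, are illustrative assumptions rather than the paper's data.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

load = [50, 55, 70, 80, 65]                       # daily gas load, hypothetical
features = {
    "temperature": [-2, -4, -9, -12, -6],         # colder days -> more heating load
    "wind_speed":  [3, 1, 4, 2, 5],
}
# keep features whose absolute correlation with the load is strong
selected = [name for name, vals in features.items()
            if abs(pearson(load, vals)) >= 0.8]
```

Only the strongly correlated features (here, temperature) would be passed on to the LSTM and TCN branches.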
    2024,16(6):810-816, DOI: 10.13878/j.cnki.jnuist.20231227003
    Abstract:
The performance of power equipment fault detection models is affected by various factors, including fault type, fault complexity, and image quality. Here, a fault detection model for power equipment based on TrellisNet and an attention mechanism is proposed. First, Long Short-Term Memory (LSTM) is integrated with a Convolutional Neural Network (CNN) to construct an LSTM-CNN that extracts fault characteristics from images, which effectively distinguishes the features of different fault types and reduces the influence of noise and other interference. The feature data obtained by the LSTM-CNN are then used as input, and by embedding the attention mechanism into TrellisNet, a high-resolution AT-TrellisNet network is constructed to detect the fault types of different power equipment. Finally, five common power equipment faults are selected for model validation. The experimental results show that, compared with some existing detection models, the proposed model achieves higher detection accuracy, exceeding 90% at best, which can meet the practical needs of power equipment fault detection.
    2024,16(6):817-826, DOI: 10.13878/j.cnki.jnuist.20220627002
    Abstract:
To address the increased load peak-to-trough ratio and user costs caused by the disorderly charging and discharging of electric vehicle charging piles in residential communities, an optimized operation strategy is proposed to achieve orderly charging and discharging of energy storage charging piles. While reducing the peak-to-trough ratio, the strategy aims to minimize users' charging costs and maximize charging pile profits. A typical day is selected to establish an optimized charging and discharging scheduling model for the energy storage charging piles, which is solved by an Improved Multi-Objective Particle Swarm Optimization (IMOPSO) algorithm, and the charging/discharging power and timing of the piles are adjusted in combination with the time-of-use electricity price. The MOPSO algorithm is improved by optimizing the inertia weights and learning factors and by adaptively changing the position splitting factor. Experimental results show that the algorithm effectively improves convergence speed, avoids falling into local optima, and better handles multi-objective problems. In the energy storage scheduling model, it reduces the typical daily load peak-to-trough ratio by 55%, a 36% improvement over the original algorithm; rationally schedules charging piles to store power during low-demand periods; effectively reduces charging costs by 20% to 30%; and improves charging pile profits, achieving a win-win situation for the power grid, users, and charging piles.
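The cost term in the objective reduces to a weighted sum of energy over the time-of-use price bands, which is what makes shifting charging into the valley pay off. The price bands and schedules below are hypothetical, not taken from the paper.

```python
def charging_cost(schedule, prices):
    """Total cost of a charging schedule (kWh per period) under TOU prices."""
    return sum(e * p for e, p in zip(schedule, prices))

# 24 one-hour periods: valley / flat / peak tariff (yuan/kWh), hypothetical
prices = [0.3] * 8 + [0.6] * 8 + [1.0] * 8

night_charging = [5] * 8 + [0] * 16    # 40 kWh drawn in the valley
peak_charging  = [0] * 16 + [5] * 8    # the same 40 kWh drawn at the peak

cost_night = charging_cost(night_charging, prices)
cost_peak  = charging_cost(peak_charging, prices)
```

The optimizer's job is then to pick the schedule (subject to pile and battery constraints) that trades off this cost against the load peak-to-trough ratio and pile profit.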
    2024,16(6):827-837, DOI: 10.13878/j.cnki.jnuist.20240311001
    Abstract:
The Modular Multilevel Converter based High Voltage Direct Current (MMC-HVDC) overhead line transmission scheme is susceptible to transient DC faults, and utilizing the Energy Storage Units (ESUs) installed within each wind turbine to absorb unbalanced power during faults is an effective solution. However, the existing literature often treats the wind farm as a single Wind Generator (WG), neglecting the differences in residual capacity among individual ESUs. This easily leads to overloading of ESUs with smaller residual capacities while those with larger residual capacities still have unused reserves, resulting in power imbalance during faults. To address these issues, this paper proposes a coordinated DC fault ride-through control strategy based on optimized control of the ESUs within WGs. The strategy adopts the variance of the State of Charge (SOC) as an indicator to quantitatively describe the differences in ESU residual capacities, and takes the maximum decline rate of the SOC variance as the objective function. The residual unbalanced power after conversion by the non-fault-pole converter station is optimally allocated to the ESUs within individual WGs, reducing the differences in residual capacity while ensuring the power balance of the system during faults. A model is developed on the PSCAD/EMTDC simulation platform to compare the proposed optimized power allocation scheme with the traditional average allocation scheme. The results show that the optimized scheme fully utilizes the power absorption capacities of the ESUs, thereby improving the DC fault ride-through capability of the system.
    2024,16(6):838-845, DOI: 10.13878/j.cnki.jnuist.20240311003
    Abstract:
The new power system requires cooperative, optimized scheduling that integrates Generation-Grid-Load-Storage (GGLS). At present, the scheduling automation system employs a relational database that relies on multiple associated tables for data query and storage, making it difficult to meet rapid computation demands. Here, we propose a graph-based computing method for intra-day coordinated GGLS scheduling. First, a graph database is used to integrate the spatiotemporal data from power generation, the grid, loads, and storage. Second, a comprehensive optimization model of intra-day coordinated GGLS scheduling is formulated, taking into account resources such as thermal power units, adjustable loads, and energy storage. Third, a graph-based power flow calculation approach is proposed to quickly perform system security checks. Finally, based on the security check results, the system operating state is corrected until all operational constraints are satisfied. Analysis of modified test cases on the IEEE 118-node and IEEE 1354-node systems verifies that the proposed coordinated optimization strategy for GGLS improves computational efficiency.
    2014,6(5):405-419, DOI:
    Abstract:
With the rapid development of the Internet of Things, cloud computing, and the mobile internet, the rise of Big Data has attracted increasing attention, bringing not only great benefits but also crucial challenges in how to better manage and utilize it. This paper describes the main aspects of Big Data, including its definition, data sources, key technologies, data processing tools, and applications, and discusses the relationship between Big Data and cloud computing, the Internet of Things, and mobile internet technology. Furthermore, the paper analyzes the core technologies of Big Data and industrial Big Data solutions, and discusses Big Data applications. Finally, the general development trend of Big Data is summarized. This review is helpful for understanding the current development status of Big Data and provides references for scientifically utilizing its key technologies.
    2009(1):1-15, DOI:
    Abstract:
Based on the author's personal experience in studying stability, this paper first introduces the enormous, century-spanning worldwide influence of the doctoral dissertation The General Problem of the Stability of Motion by Academician Lyapunov, the great Soviet mathematician and mechanician. It describes how the several great achievements pioneered in that dissertation laid the foundation of a discipline, opened up an important new research direction, and left a wealth of research topics to later generations. In particular, it answers with facts and scientific judgment the question "Lyapunov stability has led the field for more than a century; how much glow remains?" A clear viewpoint is stated: stability will be an eternal theme, a science that never ages, forever offering inspiration, insight, wisdom, and ideas.
    2011(1):1-22, DOI:
    Abstract:
System identification is the theory and methodology of establishing mathematical models of systems. Mathematical modeling has a long research history, but the discipline of system identification is only several decades old. In these few decades, system identification has developed greatly: new identification methods emerge one after another, and research results cover theory and applications across the natural and social sciences, including physics, biology, earth science, meteorology, computer science, economics, psychology, and political science. In this context, we return to ponder some basic problems of system identification, which can benefit the development of the field. This introductory paper briefly presents the definition of identification, system models and identification models, and the basic steps and purposes of identification, including experimental design and data preprocessing, as well as the types of identification methods, including least squares methods, gradient methods, auxiliary model based methods, multi-innovation methods, and hierarchical identification methods.
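As a minimal concrete instance of the least squares identification methods mentioned, the sketch below estimates a first-order ARX model y[k] = a·y[k-1] + b·u[k-1] from input/output data by solving the 2×2 normal equations directly. The system (a = 0.8, b = 0.5) and the input signal are invented for illustration.

```python
def identify_arx(y, u):
    """Batch least-squares estimate of (a, b) in y[k] = a*y[k-1] + b*u[k-1].
    Solves the 2x2 normal equations (Phi^T Phi) theta = Phi^T Y directly."""
    s11 = s12 = s22 = r1 = r2 = 0.0
    for k in range(1, len(y)):
        p1, p2 = y[k - 1], u[k - 1]      # regressor for sample k
        s11 += p1 * p1; s12 += p1 * p2; s22 += p2 * p2
        r1 += p1 * y[k]; r2 += p2 * y[k]
    det = s11 * s22 - s12 * s12
    a = (s22 * r1 - s12 * r2) / det
    b = (s11 * r2 - s12 * r1) / det
    return a, b

# Simulate noise-free data from a = 0.8, b = 0.5, then recover the parameters.
u = [1.0, -1.0, 2.0, 0.5, -0.5, 1.5, 0.0, 1.0]
y = [0.0]
for k in range(1, len(u)):
    y.append(0.8 * y[-1] + 0.5 * u[k - 1])

a_hat, b_hat = identify_arx(y, u)
```

With noisy data, larger models, or online operation, the recursive, gradient, multi-innovation, and hierarchical variants surveyed in the paper replace this direct batch solve.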
    2013,5(5):385-396, DOI:
    Abstract:
Recently, coordinated control of multi-agent systems has become a hot topic in the control field, due to its wide application in cooperative control of multiple autonomous vehicles, traffic control, formation control of unmanned aircraft, resource allocation in networks, and so on. First, background on multi-agent systems, the concept of an agent, and fundamentals of graph theory are introduced. Then the research status of swarming/flocking problems, formation control, consensus problems, and network optimization, at home and abroad, is summarized and analyzed. Finally, some open problems in multi-agent systems are posed, to encourage deeper study of the theory and application of coordinated control of multi-agent systems.
    2010(5):410-413, DOI:
    Abstract:
A three-dimensional sound source localization system is designed, a new system model is proposed, and the traditional Time Difference Of Arrival (TDOA) based algorithm is optimized. By detecting the time differences between the signals received by the microphones, combined with the known spatial positions of the array elements, the position of the sound source is determined. The acquisition part of the system consists of four microphones arranged in a regular tetrahedron. The algorithm is implemented in hardware on a TMS320C5416 DSP chip, and the whole system realizes the sound source localization function.
    2012,4(4):351-361, DOI:
    Abstract:
In recent years, cloud computing, as a new computing service model, has become a research hotspot in computer science. This paper gives a brief analysis and survey of current cloud computing systems in terms of definition, deployment models, characteristics, and key technologies. The major international and domestic research enterprises and application products in cloud computing are then compared and analyzed. Finally, the challenges and opportunities in current cloud computing research are discussed and future directions are pointed out, providing a scientific analysis of, and references for, the use and operation of cloud computing.
    2017,9(2):159-167, DOI: 10.13878/j.cnki.jnuist.2017.02.006
    Abstract:
Various indoor positioning techniques have been developed and widely applied in both manufacturing processes and people's lives. Due to electromagnetic interference and multipath effects, traditional Wi-Fi, Bluetooth, and other wireless locating technologies struggle to achieve high accuracy. Modulated white LEDs can provide both illumination and location information to achieve highly accurate indoor positioning. In this paper, we first introduce several modulation methods for visible light positioning systems and compare their characteristics. We then propose a viable indoor positioning scheme based on visible light communications and discuss two different demodulation methods. Next, we introduce several positioning algorithms used in visible light communication systems. Finally, the problems and prospects of visible light communication based indoor positioning are discussed.
    2017,9(2):174-178, DOI: 10.13878/j.cnki.jnuist.2017.02.008
    Abstract:
With the deepening study of nonlinear effects in optical fiber, distributed optical fiber sensors have been widely studied and applied. This paper introduces the applications of optical fiber sensors. To realize different types of distributed fiber sensing, the principles of three kinds of scattered light, based on Brillouin scattering, Raman scattering, and Rayleigh scattering, are summarized. Finally, future development directions of distributed fiber sensing are discussed.
    2014,6(5):426-430, DOI:
    Abstract:
We propose a scheme to produce a continuous-variable (CV) pair-entanglement frequency comb by nondegenerate optical parametric down-conversion in an optical oscillator cavity in which a multichannel variational-period poled LiTaO3 crystal serves as the gain medium. Using the CV entanglement criteria, we prove that every pair generated from the corresponding channel is entangled. The characteristics of signal and idler entanglement are discussed. The CV pair-entanglement frequency comb may be significant for applications in quantum communication and computation networks.
    2013,5(6):544-547, DOI:
    Abstract:
For power quality signals in the steady state, this paper integrates the Hanning window function with the Fast Fourier Transform (FFT) and applies it to harmonic analysis of power quality. Matlab simulation verifies the feasibility of the proposed windowed FFT method, and the results show that combining the Hanning window function with the FFT can significantly reduce harmonic leakage, effectively weaken interference between harmonics, and accurately measure the amplitude and phase of the power signal.
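The core idea, multiplying the signal by a Hanning (Hann) window before the Fourier transform to suppress spectral leakage, can be illustrated with a pure-Python single-bin DFT. The test signal (a unit 50 Hz fundamental plus a 0.2-amplitude 3rd harmonic) and the sampling setup are assumptions; a real implementation would use an FFT as the paper does.

```python
import math

def hann(n, N):
    """Symmetric Hanning window sample."""
    return 0.5 - 0.5 * math.cos(2 * math.pi * n / (N - 1))

def dft_amplitude(signal, k):
    """Magnitude of a single DFT bin k (two-sided spectrum)."""
    N = len(signal)
    re = sum(signal[n] * math.cos(2 * math.pi * k * n / N) for n in range(N))
    im = -sum(signal[n] * math.sin(2 * math.pi * k * n / N) for n in range(N))
    return math.sqrt(re * re + im * im)

N = 256
fs = 1600.0                       # sampling rate: 32 samples per 50 Hz cycle
sig = [math.sin(2 * math.pi * 50 * n / fs)            # fundamental, amplitude 1.0
       + 0.2 * math.sin(2 * math.pi * 150 * n / fs)   # 3rd harmonic, amplitude 0.2
       for n in range(N)]
win = [s * hann(n, N) for n, s in enumerate(sig)]

# 50 Hz lands exactly in bin k = 50 * N / fs = 8 (150 Hz in bin 24).
# The Hann window's coherent gain is ~0.5, so amplitude ~= |X(k)| * 2 / (N * 0.5).
amp_50  = dft_amplitude(win, 8)  * 2 / (N * 0.5)
amp_150 = dft_amplitude(win, 24) * 2 / (N * 0.5)
```

Without the window, any harmonic that does not fall exactly on a bin would leak into its neighbors; the Hann window's low sidelobes keep adjacent harmonics from contaminating each other's amplitude estimates.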
    2014,6(3):226-230, DOI:
    Abstract:
As a modulation scheme with relatively strong anti-interference capability, Quadrature Phase Shift Keying (QPSK) has been extensively used in wireless satellite communication. This paper describes the Matlab simulation of QPSK demodulation and designs an all-digital QPSK demodulator in FPGA. The core of demodulation is synchronization, which includes carrier synchronization and symbol synchronization. Carrier synchronization is accomplished through a numerical Costas loop, while symbol synchronization uses modulus-square spectrum analysis; the results are simulated in Matlab. Following the idea of software radio, the communication functions are implemented as far as possible with upgradable or substitutable software. The parameter values obtained through Matlab simulation, combined with an appropriate hardware system, realize the design of the proposed all-digital FPGA-based meteorological satellite demodulator.
    2014,6(6):515-519, DOI:
    Abstract:
This paper proposes a coarse-to-fine two-step detection scheme to mine outliers in multivariate time series (MTS). According to the confidence interval of the data in a sliding window, features of both the variation trend value and the relevant variation trend value are constructed and used in the two detection steps. A rapid feature extraction algorithm is also studied. The outlier detection scheme is then applied to mine outliers before and after an accident at a 110 kV grid transformer substation in Jiangsu province. Data sets from various equipment tables, collected by the OPEN3000 data surveillance system, were checked by the proposed scheme, and the experimental results indicate that the algorithm can rapidly and precisely locate outliers.
    2017,9(6):575-582, DOI: 10.13878/j.cnki.jnuist.2017.06.002
    Abstract:
Knowledge graph technology has received wide attention and study in recent years. In this paper, we introduce construction methods and recent developments of knowledge graphs in detail, and summarize their interdisciplinary applications and future research directions. The paper details the key technologies of textual, visual, and multi-modal knowledge graphs, such as information extraction, knowledge fusion, and knowledge representation. As an important part of knowledge engineering, the knowledge graph, especially the multi-modal knowledge graph, is of great significance for efficient knowledge management, acquisition, and sharing in the era of big data.
    2011(1):23-27, DOI:
    Abstract:
To solve sudoku puzzles more efficiently, a novel approach is proposed. We employ real-number coding to remove the integer constraint, and use the L0 norm to guarantee the sparsity of the solution. The L1 norm is then used to approximate the L0 norm on the basis of the RIP and KGG conditions. Finally, slack vectors are introduced to transform the L1 norm minimization into a convex linear programming problem, which is solved by the primal-dual interior point method. Experiments demonstrate that this algorithm reaches a 100% success rate on easy, medium, difficult, and evil levels, and an 86.4% success rate on 17-clue sudokus. The average computation time is quite short and independent of the difficulty of the sudoku itself. Overall, this algorithm is superior to both constraint programming and the Sinkhorn algorithm in terms of success rate and computation time.
    2013,5(5):414-420, DOI:
    Abstract:
With the continuous increase of road vehicles, occasional congestion caused by traffic accidents seriously affects commuting efficiency and the overall operation of the road network. Real-time, accurate forecasting of short-term traffic flow is key to intelligent transportation systems and a precondition for relieving congestion through route guidance and clearing. Given the uncertain and nonlinear features of traffic volume, a model integrating an improved BP neural network and an AutoRegressive Integrated Moving Average (ARIMA) model is established to forecast short-term traffic flow. A case application shows that the combined model outperforms the single models in forecasting performance and accuracy.
    2015,7(1):86-91, DOI:
    Abstract:
Based on GDAS and GBL NCEP reanalysis data with resolutions of 1°×1° and 2.5°×2.5°, respectively, the trajectory of the air mass at 100 m altitude over the Hetian meteorological station is simulated by HYSPLIT (Hybrid Single Particle Lagrangian Integrated Trajectory model), developed by the Air Resources Laboratory of NOAA, to estimate the effects of integration error and resolution error on the trajectory calculation error. The contribution of the integration error is found to be very small; it increases slightly with the integration time and is unrelated to the resolution of the meteorological data. The resolution error varies at different time points and is related to the topography, the weather system, and the interpolation. Trajectories simulated using datasets of different resolutions differ significantly, indicating that the resolution error contributes more to the trajectory calculation error than the integration error does.
