Abstract: Artificial Intelligence Generated Content (AIGC) technology offers a wide range of information generation services. However, accurately assessing AIGC quality remains a critical open problem. This study investigates the quality of images generated by large models and the metrics used to evaluate them. First, it surveys common methods for evaluating AIGC from a technical perspective, such as deep learning and computer vision approaches. It introduces the metrics these methods employ, including accuracy, relevance, consistency, and interpretability, and examines their performance across diverse generated content. Then, to demonstrate the practical application of these metrics, the study conducts an evaluation experiment on images generated by ERNIE Bot. Objective evaluation relies on quantitative measures such as histograms and noise counts, while subjective evaluation focuses on the overall coordination and aesthetic appeal of the images. Finally, by comparing the objective and subjective results, the study identifies highly reliable metrics for assessing AIGC image quality, including color bias, noise count, and psychological expectation. This research provides a theoretical foundation for evaluating AIGC quality and experimentally verifies, through its results, the effectiveness and reliability of combining objective and subjective metrics for AIGC product evaluation.
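The objective metrics named in the abstract (channel histograms, color bias, and a noise count) can be illustrated with a minimal sketch. This is an assumed implementation for illustration only, not the paper's actual pipeline: the function names, the Laplacian-based noise score, and the synthetic test image are all hypothetical choices.

```python
import numpy as np

def channel_histograms(img, bins=256):
    """Per-channel intensity histograms for an (H, W, 3) RGB image."""
    return [np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
            for c in range(img.shape[-1])]

def color_bias(img):
    """Deviation of each channel mean from the overall gray mean.

    A strongly positive or negative entry suggests a color cast
    (one hypothetical way to quantify the paper's "color bias").
    """
    means = img.reshape(-1, img.shape[-1]).mean(axis=0)
    return means - means.mean()

def noise_score(gray):
    """Rough noise estimate: mean absolute response of a 4-neighbor
    Laplacian filter over a grayscale image (an assumed proxy for
    the paper's "noise count")."""
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(np.abs(lap).mean())

# Demo on a synthetic image (hypothetical data, not the study's dataset).
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3)).astype(np.float64)

hists = channel_histograms(img)        # three 256-bin histograms
bias = color_bias(img)                 # per-channel cast, sums to ~0
noise = noise_score(img.mean(axis=-1)) # high for this noisy random image
```

In practice a real pipeline would compare these scores across a batch of generated images and correlate them with the subjective ratings of coordination and aesthetic appeal.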