Abstract: Teaching a creative software course is a process of constructive learning, in which students study, observe, and practice the knowledge of the subject domain in depth, and then carry out subjective knowledge construction and practice. The source of software creation is usually the expression of the designer's inner creative impulse, while the motivation for creative software is the engineering process and system implementation driven by user needs. The creative software course stimulates the vitality of software design through innovative teaching, while software engineering provides the theoretical and practical foundation for realizing ideas. Taking the design of a general education course on AI-empowered software design as its theme, this paper discusses the course project in terms of concept design, idea generation, and evaluation, and proposes an intelligent teaching assistant system for three course objectives: enabling creative people to design software, designing software that can generate ideas, and designing an environment conducive to generating ideas. By combining modules such as intelligent Q&A, course management, and automated interactive software implementation, the system optimizes teachers' teaching process and supports students' personalized learning preferences.
Keywords: creative software; creative software design; large language models; constructivism
Abstract: Software engineering has entered a new era of intelligence, agility, and value orientation. In view of the challenges of training a new generation of software talents, this paper first analyzes the core content of the new SWEBOK Guide V4 and discusses the resulting changes in software engineering. Then, following the new teaching concept of "knowledge-based, ability-oriented, and value-first", a new software engineering curriculum system is designed with reference to the new SWEBOK, focusing on high-level capabilities, embedding AI throughout the whole process, and carrying out value-led ideological and political education in the course.
Keywords: SWEBOK; software engineering education; AI+; ideological and political education in courses; software engineering body of knowledge
Abstract: In order to improve teaching quality and cultivate students' computational thinking ability, a series of explorations have been conducted on a blended online-offline teaching mode for the Computer Programming Fundamentals (C language) course that integrates the OBE concept into teaching practice. The computer foundation course teaching team first explored the offline classroom teaching mode in depth; then it built online teaching resources, proposed a method to guide students to learn efficiently online, and explored a two-stage (pre-class and post-class) online teaching model; finally, four preliminary guiding principles suited to the actual situation of the school are proposed for effectively combining offline and online teaching in the blended learning mode. After three academic years of reform, students' academic performance, knowledge application, and performance in competitions have all improved significantly; the teaching team has achieved notable results in the construction of undergraduate teaching quality engineering projects and has published multiple research papers related to the blended learning mode. The results of the reform indicate that the blended learning mode is effective and can provide a valuable practical paradigm for the teaching reform of basic programming courses.
Abstract: Automated counting of stacked plywood materials is a major challenge in industrial production. Traditional methods based on manual and physical counting are time-consuming and inefficient, while stacked plywood images are often affected by factors such as uneven edges and irregular thickness, so existing deep learning algorithms produce inaccurate counts because the features they extract are not sufficiently representative. To address these issues, we propose a self-supervised learning framework, CountNet, for counting stacked plywood materials. CountNet introduces a novel loss function that leverages the advantages of contrastive learning to further amplify the differences between positive and negative samples, enabling the network to extract more representative visual features. These features are then utilized in downstream tasks to achieve accurate counting. Experimental results demonstrate that the proposed method outperforms other common counting models in terms of accuracy, loss reduction, and various other metrics, showcasing its superiority in counting capability.
Keywords: self-supervised contrastive learning; computer vision; stacked plywood counting; data augmentation; optimization of loss function
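As a rough illustration of the kind of contrastive objective the abstract refers to, the following is a minimal NT-Xent-style loss sketch in PyTorch; it pulls two views of the same image together and pushes other samples apart. Names and hyperparameters are illustrative, not the exact loss proposed in CountNet.

import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, temperature=0.5):
    """z1, z2: (N, D) embeddings of two augmented views of the same N images."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                       # (2N, D)
    sim = torch.mm(z, z.t()) / temperature                # pairwise cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float('-inf'))                 # exclude self-similarity
    # positives: view i matches view i+n and vice versa
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)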
Abstract: Online recruitment has gradually replaced traditional offline recruitment and become the preferred channel for job seekers, but the emergence of fake recruitment advertisements has caused great trouble for enterprises and job seekers and seriously hindered the healthy development of online recruitment. To address the low detection accuracy and poor time efficiency of existing single machine learning models and deep learning models, a fake recruitment advertisement detection model based on feature fusion is proposed. First, an attention mechanism is introduced to assign weights to each base classifier; second, the features of multiple base classifiers are horizontally fused, significantly improving the detection of fake recruitment advertisements. Experiments on the EMSCAD dataset show that the accuracy of the feature fusion model is 1.78%, 1.67%, and 1.16% higher than that of the machine learning model, the BERT deep learning model, and the TextCNN deep learning model, respectively, and that its running time is slightly higher than that of the machine learning models but much lower than that of the deep learning models, demonstrating good performance in detecting fake recruitment advertisements.
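A minimal sketch of attention-weighted horizontal feature fusion of the kind described above (one learned score per base classifier, softmax-normalized, then concatenation); the module and layer names are hypothetical and the paper's exact architecture may differ.

import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, feat_dim):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)        # one attention score per base classifier

    def forward(self, feats):                      # feats: (batch, num_classifiers, feat_dim)
        scores = self.score(feats).squeeze(-1)     # (batch, num_classifiers)
        weights = torch.softmax(scores, dim=1)     # attention weight for each base classifier
        weighted = feats * weights.unsqueeze(-1)   # scale each classifier's features
        return weighted.flatten(start_dim=1)       # horizontal (concatenated) fusion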
Abstract: A temporal classification network, MGDA-Net, based on multi-scale gated convolution and deep attention is proposed to address the insufficient capture of deep features in sequences and the inadequate feature learning of existing time series classification methods, effectively improving the accuracy of time series classification tasks. MGDA-Net uses a multi-scale gated convolution module to capture multi-scale information and enhances feature extraction by screening and regulating the feature flow through gating mechanisms. Meanwhile, a deep attention module further captures the spatial relationships between features while preserving the relationships between channels, enhancing the model's ability to learn important features. Finally, residual connections are introduced to promote feature reuse and information flow. Experimental results show that MGDA-Net achieves the highest ranking and the lowest average error on 20 time series datasets and improves classification accuracy by 2.3% to 10.5% on multiple high-dimensional datasets, demonstrating its effectiveness.
Keywords: time series classification; multi-scale gated convolution; depth-wise attention; residual networks
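For orientation, a plausible PyTorch sketch of a multi-scale gated 1D-convolution block for time series follows: parallel convolutions with different kernel sizes, each modulated by a sigmoid gate that screens the feature flow. Kernel sizes and layer names are illustrative, not taken from MGDA-Net.

import torch
import torch.nn as nn

class MultiScaleGatedConv(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(in_ch, out_ch, k, padding=k // 2) for k in kernel_sizes)
        self.gates = nn.ModuleList(
            nn.Conv1d(in_ch, out_ch, k, padding=k // 2) for k in kernel_sizes)

    def forward(self, x):                                # x: (batch, in_ch, length)
        outs = [conv(x) * torch.sigmoid(gate(x))         # gating regulates the feature flow
                for conv, gate in zip(self.branches, self.gates)]
        return torch.cat(outs, dim=1)                    # concatenate multi-scale features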
Abstract: With the development of industrial automation, the timely detection and screening of mechanical problems in large factories can avoid significant property and manpower losses, so detecting machine anomalies from sound is of great significance. To address unbalanced experimental data, insufficient feature extraction, and low recognition accuracy caused by insensitivity to fault information, this paper proposes a multi-feature fusion method and an abnormal sound detection method based on an interpolation convolutional autoencoder. First, a multi-feature extraction network is constructed: on the basis of the original signal features, the log-Mel spectral features and the audio temporal features are fused into a new signal feature, so that the method learns more refined audio features and improves abnormal sound detection performance. Second, an improved interpolation convolutional autoencoder is used for abnormal sound detection and diagnosis of non-stationary sounds, making the diagnosis of mechanical sounds more accurate. Finally, the proposed method is trained and validated on the publicly available ToyADMOS and MIMII datasets. The results show that the proposed method diagnoses abnormal sounds with higher accuracy, reaching 99% and improving the AUC score by 7.16% over traditional deep learning methods.
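The log-Mel spectral feature mentioned above is a standard audio representation; a minimal extraction sketch using librosa is shown below. The sampling rate, FFT size, and Mel-band count are illustrative defaults, not the paper's settings.

import librosa
import numpy as np

def log_mel(wav_path, sr=16000, n_fft=1024, hop_length=512, n_mels=64):
    y, sr = librosa.load(wav_path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=n_fft,
                                         hop_length=hop_length, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)   # (n_mels, frames) log-Mel feature map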
Abstract: The loop closure detection algorithm in SLAM (Simultaneous Localization and Mapping) systems is primarily used to mitigate cumulative errors and optimize pose estimation. Currently, in scenarios involving changes in lighting conditions, variations in camera perspective, and dynamic objects, loop closure detection algorithms exhibit insufficient robustness. To address the skew and orientation changes of input images under varying camera perspectives, which adversely affect the robustness of loop closure detection, a capsule network-based loop closure detection algorithm (SeqCNLCD) is proposed. The algorithm first employs a capsule network to extract feature vectors from the current frame and historical frames, then computes the similarity between the current frame and each historical frame; the similarity between two frames is taken as the similarity score for the image pair and entered into a similarity score matrix. Using a sequence matching approach, the algorithm identifies the frame with the maximum sequence similarity as the optimal loop closure. Finally, on the Gardens Point dataset, SeqCNLCD demonstrates a 5.88% improvement in AUC (Area Under the Curve) compared to the SeqCALC algorithm, and on the Campus dataset it achieves an 11.27% higher AUC than SeqCALC. Experimental results indicate that SeqCNLCD is highly robust to changes in camera perspective.
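As an illustrative sketch of the sequence-matching step (not SeqCNLCD's exact procedure), the following accumulates similarity scores along a short diagonal of the similarity matrix and returns the historical frame with the maximum sequence similarity; all names and the matching direction are assumptions.

import numpy as np

def best_loop_candidate(sim_matrix, seq_len=5):
    """sim_matrix[i, j]: similarity between current frame i and historical frame j."""
    n_cur, n_hist = sim_matrix.shape
    best_score, best_hist = -np.inf, -1
    for j in range(n_hist - seq_len + 1):
        # score of matching the last seq_len current frames against history frames j..j+seq_len-1
        score = sum(sim_matrix[n_cur - seq_len + k, j + k] for k in range(seq_len))
        if score > best_score:
            best_score, best_hist = score, j + seq_len - 1
    return best_hist, best_score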
Abstract: Aiming at the difficulty of semantic analysis caused by the sparsity of short texts, a method combining two-channel feature fusion and adversarial training is proposed for short text classification. First, ChineseBERT is used for word embedding representation to address the sparse vocabulary of Chinese short texts, and the FGM adversarial training technique is introduced to enhance the robustness and generalization ability of the overall model. Then, the semantic information is enriched by two-channel feature extraction with DPCNN and BiGRU, so that the model can better understand the meaning of the short text. To fully acquire and fuse feature information from different sources, a multi-head attention mechanism is introduced to fuse the features and improve the performance of the model. The proposed model is tested on two datasets, THUCNews and Today's Headlines, and shows improvements in accuracy, recall, and F1 value compared with traditional models, proving its effectiveness and feasibility in short text classification and providing an effective tool for solving practical short text classification problems.
Keywords: ChineseBERT; DPCNN; BiGRU; multi-head attention mechanism; feature fusion; adversarial training
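FGM adversarial training, used in the abstract above, is a widely adopted technique for BERT-style text models: a small gradient-direction perturbation is added to the word-embedding weights during training and removed afterwards. The sketch below is the commonly used generic implementation, assuming the embedding parameter name contains 'word_embeddings'; it is not the paper's own code.

import torch

class FGM:
    def __init__(self, model, epsilon=1.0, emb_name='word_embeddings'):
        self.model, self.epsilon, self.emb_name = model, epsilon, emb_name
        self.backup = {}

    def attack(self):
        for name, param in self.model.named_parameters():
            if param.requires_grad and self.emb_name in name and param.grad is not None:
                self.backup[name] = param.data.clone()
                norm = torch.norm(param.grad)
                if norm != 0:
                    param.data.add_(self.epsilon * param.grad / norm)  # add perturbation

    def restore(self):
        for name, param in self.model.named_parameters():
            if name in self.backup:
                param.data = self.backup[name]        # remove perturbation after backward pass
        self.backup = {}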
Abstract: Electroencephalography (EEG) signals are difficult to disguise and have high temporal resolution, and thus can more accurately reflect the real emotional state of human beings. Most existing works adopt only a single type of frequency-domain, temporal, or spatial feature and cannot comprehensively learn important emotion-related information. To solve these problems, we propose a novel EEG-based temporal-frequency-spatial fusion network (E-SFTNet), which specifically includes a temporal-frequency network (TF-Net) based on bi-directional long short-term memory (BiLSTM) for learning temporal-frequency features and a spatial-frequency network (SF-Net) based on multiple convolutional and residual modules for learning spatial-frequency features. We conduct subject-dependent and subject-independent experiments on the SEED public emotion dataset to validate the performance of the model, achieving accuracies of 96% and 85.66%, respectively. Experimental results show that E-SFTNet performs well in the EEG-based emotion recognition task and is superior to existing state-of-the-art methods. In addition, the activation of different emotions in different brain regions is revealed on the basis of brain topographic maps, explaining the relationship between brain regions and emotions. Overall, this study provides a new idea for EEG-based emotion recognition.
Abstract: Aiming at the problems of lengthy entity structure and entity nesting in the unstructured data of optoelectronic enterprise information, this paper proposes a BERT-BiGRU-IDCNN-CRF model fused with adversarial training. First, the pre-trained BERT model is used to obtain dynamic word vectors containing contextual semantics, which are fused with the perturbations generated by adversarial training. Then the word vectors are fed into a bidirectional gated recurrent unit network (BiGRU) and an iterated dilated convolutional neural network (IDCNN) to extract features, which are concatenated. Finally, they are decoded by a conditional random field (CRF) to obtain the target sequence. The model is validated on the People's Daily dataset and the MSRA dataset. Experiments show that the proposed model effectively improves the accuracy, recall, and F1 value, as well as the stability and generalization ability; on the optoelectronic dataset, the F1 value increases by 9.2 percentage points over the BiGRU-CRF baseline model and by 12.6 percentage points over the IDCNN-CRF baseline model.
Keywords: named entity recognition; BiGRU; IDCNN; adversarial training
Abstract: The rapid development of online education has accumulated a large number of learning records, making it possible to track and evaluate students' knowledge states. Considering that existing knowledge tracing models do not account for individualized factors such as students' knowledge level and forgetting behavior, a knowledge tracing model based on the self-attention mechanism (RAKT) is proposed. First, a separated Transformer-decoder architecture is used to extract the dynamic changes in students' knowledge states. Second, according to each student's knowledge reserve and forgetting coefficient, different attention weights are assigned to historical interactions to obtain the student's history-related performance. Finally, the probability of the student answering correctly at the next moment is predicted. Comparative experiments with 6 models on 3 public datasets show that the AUC of the proposed model increases by 3%~5% and the ACC by 5%~8% compared with the baseline algorithms, and that it performs well in simulating individual student learning trajectories.
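The forgetting-aware reweighting described above can be pictured as decaying attention scores over historical interactions by how long ago they occurred; the sketch below uses a simple linear decay before softmax. The decay form and names are illustrative assumptions, not RAKT's exact formulation.

import torch

def forgetting_weighted_attention(scores, time_gaps, forget_coef=0.1):
    """scores: (batch, history) raw attention scores; time_gaps: steps since each interaction."""
    decayed = scores - forget_coef * time_gaps.float()   # older interactions receive lower scores
    return torch.softmax(decayed, dim=-1)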
Abstract: The feature extraction capability of classic lightweight networks exhibits certain limitations. To address this, we investigate reparameterizable adaptive activation functions and propose the RepAct activation function, which introduces negligible additional computational cost at the inference stage. RepAct's adaptive, reparameterizable multi-branch structure leverages branch-specific weights to adaptively learn diverse feature information, significantly enhancing the learning capacity of lightweight networks. RepAct demonstrates significant accuracy improvements for classification tasks across various classic lightweight CNN and ViT networks. For instance, on ImageNet100, the Top-1 accuracy of MobileNetV3-Small increases by 6.9%, and on CIFAR100 the Top-1 accuracy improves by 5.71%, surpassing other mainstream activation functions. Furthermore, Grad-CAM visualizations reveal the mechanism behind RepAct's enhancement of the network's feature extraction capability.
Abstract: With the widespread use of social media, the rapid spread of rumors poses a serious threat to the authenticity of information and to social stability, so developing efficient rumor detection methods to automatically identify the authenticity of social media content has become an urgent need. However, the scarcity and limited scale of rumor datasets greatly restrict the training and evaluation of rumor detection models and affect their effectiveness and generalization ability. To solve these problems, an implicit self-augmentation rumor detection model is proposed, which combines the semantic information extraction capabilities of RoBERT and ToBERT. First, the text content and related comments are merged; second, a sequential backbone method and a fine-tuned BERT model are used to extract semantic features; then, Mixup implicit self-augmentation is applied at each level of model training to amplify the data at the feature level, improving data diversity and the generalization ability of the model; finally, a classification layer based on time series modeling and a decision layer based on context modeling complete the rumor detection task. Experimental results show that the accuracy of the proposed method on the PHEME and Ma-Weibo datasets reaches 97.36% and 98.25%, which is 2.2% and 0.8% higher than that of the current best models.
Keywords: rumor detection; social media; BERT model; Mixup data augmentation; deep learning
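Mixup at the feature level, as used above, linearly interpolates pairs of hidden representations and their labels; a minimal generic sketch follows (alpha, shapes, and names are illustrative, not the paper's configuration).

import torch

def mixup_features(feats, labels, alpha=0.2):
    """feats: (batch, dim) hidden features; labels: (batch, num_classes) one-hot labels."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    idx = torch.randperm(feats.size(0))
    mixed_feats = lam * feats + (1 - lam) * feats[idx]      # interpolate feature pairs
    mixed_labels = lam * labels + (1 - lam) * labels[idx]   # interpolate labels the same way
    return mixed_feats, mixed_labels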
Abstract: With the massive growth of IoT devices, the data they generate is also growing exponentially. Data has value only when it has acceptable quality, and noisy data is inevitable in such massive volumes. To address this issue, a density peak clustering algorithm, Gini-PSO-DPC, based on the Gini coefficient and particle swarm optimization is proposed. First, the optimal cutoff distance is calculated from all data points using the Gini coefficient; second, the particle swarm optimization algorithm is used to find K approximately optimal initial cluster centers and generate K initial clusters; finally, each sample point is assigned to the corresponding cluster according to the category of its nearest higher-density point. Simulation results show that the average accuracy of the Gini-PSO-DPC algorithm reaches 96.81%, which is 2.44%, 0.89%, and 0.9% higher than that of the improved K-means, DMGA-FCM, and DPC algorithms, respectively; the average accuracy also reaches 94.3%, 1.22%, 2.02%, and 1.33% higher than the improved K-means, DMGA-FCM, and DPC algorithms, respectively. In the ablation experiments, the Gini-PSO-DPC algorithm shows a more stable and reasonable cutoff distance setting and shorter clustering time, indicating stronger global search ability, higher adaptability, and better clustering performance.
Keywords: Internet of Things; clustering algorithm; DPC; Gini-PSO-DPC; anomaly detection
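For background, the standard DPC quantities that the improved algorithm builds on are sketched below: local density under a cutoff distance d_c, and assignment of each point to its nearest higher-density neighbour. The Gini-based cutoff selection and PSO center initialization from the paper are not reproduced; this is a generic illustration only.

import numpy as np
from scipy.spatial.distance import cdist

def dpc_density_and_parents(X, d_c):
    d = cdist(X, X)
    rho = (d < d_c).sum(axis=1) - 1                 # local density with a cutoff kernel
    parent = np.full(len(X), -1)
    order = np.argsort(-rho)                        # visit points from high to low density
    for rank, i in enumerate(order):
        higher = order[:rank]                       # points with higher density
        if len(higher):
            parent[i] = higher[np.argmin(d[i, higher])]  # nearest higher-density point
    return rho, parent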
Abstract: Benefiting from the rapid development of Internet of Things and edge computing (EC) technologies, vehicular edge computing (VEC) has gradually become a research hotspot. However, in the Internet of Vehicles, end vehicles are limited by their own computing power and communication resources, making it difficult to execute compute-intensive or low-latency applications. It is therefore important to study an effective task offloading strategy for vehicular edge computing. Although task offloading benefits users, it also incurs extra offloading costs, which are among the issues service buyers care about most. This paper considers the cost of task offloading and adopts the Lyapunov optimization method to ensure the stability of the task queue. In the final solution step, the task offloading cost problem is transformed into a TSP problem, and a Minimizing Cost Task Offloading Algorithm (MCTOA) based on simulated annealing is proposed to solve it. Experimental results show that, compared with other offloading algorithms, MCTOA reduces task offloading costs by 25% and increases system throughput by nearly 20%, indicating that the algorithm can effectively ensure the stability of task queues and minimize offloading costs in vehicular task offloading.
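A generic simulated-annealing sketch of the kind MCTOA is based on follows: it minimizes the cost of an ordering (matching the abstract's TSP reformulation) using 2-opt-style moves and a geometric cooling schedule. The cost function, moves, and schedule are illustrative assumptions, not the paper's algorithm.

import math
import random

def simulated_annealing(cost, n, T=1.0, T_min=1e-3, alpha=0.95, iters=100):
    """cost(order) -> float; n: number of tasks/nodes to order."""
    order = list(range(n))
    best, best_cost = order[:], cost(order)
    while T > T_min:
        for _ in range(iters):
            i, j = sorted(random.sample(range(n), 2))
            cand = order[:i] + order[i:j + 1][::-1] + order[j + 1:]   # 2-opt style move
            delta = cost(cand) - cost(order)
            if delta < 0 or random.random() < math.exp(-delta / T):   # accept worse moves with prob
                order = cand
                if cost(order) < best_cost:
                    best, best_cost = order[:], cost(order)
        T *= alpha                                                    # cool down
    return best, best_cost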
Abstract: Early diagnosis of lupus nephritis is crucial for the treatment and prognosis of patients, yet traditional diagnostic methods rely on doctors' clinical experience. To improve diagnostic efficiency and accuracy, an auxiliary diagnostic model for lupus nephritis based on feature analysis and optimization techniques is proposed. To grade the progression of lupus nephritis, medical experts classified 485 patients as mild, moderate, or severe based on clinical indicators. A feature selection method, FSMACS, based on adjusted cosine similarity is proposed to address redundant clinical indicators and improve detection efficiency. To handle the class imbalance of the annotated data, an oversampling method, IBOA, based on an individual Bayesian imbalance impact index is proposed to reduce classification errors. Experimental results show that the model optimized with FSMACS and IBOA performs well on various conventional classifiers; when Adaboost is used for classification, the accuracy, recall, F1 score, and geometric mean reach 89.6%, 89.4%, 93.4%, and 92.1%, respectively. This provides an efficient and accurate method for the auxiliary diagnosis of lupus nephritis.
Abstract: To address the long construction time and poor quality of spatial acceleration structures built for complex voxel scenes with uneven density distribution, a hybrid spatial acceleration structure based on density estimation and clustering is proposed. On top of the traditional bounding volume hierarchy, a continuous density environment is constructed for discrete voxel data through an improved kernel density estimation method. The high-density regions in voxel space are determined from the first- and second-order partial derivatives of the kernel density estimation function together with the clustering results. An adaptive jump flooding algorithm is proposed to construct signed distance fields in these regions as the leaf nodes of the acceleration structure tree. All computation is carried out on GPUs, meeting the demand for massively parallel computation in complex scenes. The results show that, in complex scenes with uneven voxel density distribution, the construction time of the spatial acceleration structure tree supported by this algorithm is significantly shortened and search efficiency is improved, which implies that the method can effectively improve the rendering speed of ray tracing.
Keywords: spatial acceleration structure; kernel density estimation; density-based clustering; signed distance fields; ray tracing
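To illustrate the density-estimation step conceptually, the sketch below fits a Gaussian kernel density estimate over discrete point positions and evaluates the resulting continuous density at arbitrary query locations (here via SciPy). The paper's improved KDE, its derivative analysis, and the GPU implementation are not reproduced; the data is a random stand-in.

import numpy as np
from scipy.stats import gaussian_kde

voxel_centres = np.random.rand(3, 1000)          # (dims, n_points) stand-in for voxel centre positions
kde = gaussian_kde(voxel_centres)                # continuous density field from discrete samples
query = np.array([[0.5], [0.5], [0.5]])
density_at_query = kde(query)                    # evaluate the estimated density at a query point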
Abstract: Aiming at the long planning time, large number of turning points, and poor path safety of most grid-map-based path planning algorithms, this paper proposes an improved jump point search algorithm. The algorithm makes two main improvements. First, a new neighbor selection strategy is proposed, which dynamically selects effective neighbor points according to the current search direction and the direction from the current node to the goal. Second, the jump point selection mechanism is improved: since jump points necessarily appear around obstacles, the vertex of an obstacle is taken as a new jump point according to the current search direction and the direction of the obstacle. To verify the effectiveness of the improved algorithm, we compared it with the original jump point search algorithm in simple and complex environments through simulation experiments: the number of expanded nodes is reduced by 43.4% on average, the running time by 19.15%, the number of turning points by 42.44%, and the path length by 2.12%. To further verify path safety, we conducted experiments on a ROS robot; the results show that the improved jump point search algorithm, when applied to the ROS robot, provides good path safety.
Keywords: ROS robot; path planning; jump point search; dynamic neighbor point; path safety
Abstract: In order to improve the response speed and processing efficiency of police work, the informatization of police systems has become increasingly important. In the past, the classification of police incidents relied mainly on manual processing, which required a large amount of manpower and material resources and was prone to errors. To ensure that various types of police incidents are properly classified and handled, an intelligent "Eagle Eye" police incident analysis system based on an improved BERT model is designed. The BERT model can extract the relational features of words in sentences and thus capture sentence semantics more comprehensively. The system intelligently identifies different types of incidents with a classification accuracy of over 94%; it can also aggregate and output multidimensional statistics by province, city, district, and incident category, and automatically match the relevant handling procedures, improving the analysis and decision-making efficiency of police work.
Abstract: Addressing the low efficiency and high training cost of current deep-learning-based layout analysis methods, this paper proposes a single-stage target detection network, RCW-YOLO, improved from YOLOv5s. First, the C3 module in YOLOv5s is improved with the Res2Net module, strengthening the network's ability to extract multi-scale features from document images. Second, the lightweight up-sampling operator CARAFE is used to optimize the feature fusion network and reduce information loss during up-sampling. Finally, WIoUv3 is adopted as the bounding box regression loss function, assigning more attention weight to samples of average quality to improve the model's generalization ability and overall performance. Experimental results show that RCW-YOLO achieves 87.2%, 76.4%, and 94.5% mAP@0.50:0.95 on the CDLA, IIIT-AR-13K, and PubLayNet datasets, respectively. Compared with existing two-stage and other single-stage algorithms, RCW-YOLO has lower computational complexity and parameter count while maintaining excellent accuracy.
Abstract: Optical technology for large-scale or even whole-brain microscopic imaging of the mouse brain is limited by existing computer software and hardware: real-time rendering is poor and visualization is slow when modeling and visualizing GB-scale 3D vascular images. To this end, a multi-resolution data visualization method is proposed, covering three aspects: multi-resolution tube modeling, view-frustum-based visualization, and high-performance IO optimization. First, blood vessels are segmented from the imaging data, and the Marching Cubes algorithm and a patch reduction algorithm are used to build multi-resolution models of the vessel data; second, based on the idea of view frustum culling, low-cost, highly real-time multi-resolution visualization is established; finally, parallelization is used for high-performance IO optimization. Experiments show that the proposed algorithm can load and process 2.23 GB of 3D vascular data within 733 seconds, with a frame rate of 39 FPS and a parallel efficiency of 23.71%. This improves the visualization speed and responsiveness of the model, overcomes the computational limitations of large-scale vascular data visualization, and better displays vascular structure.
Abstract: Traditional pointer-style mechanical water meters rely predominantly on manual reading and recognition, which is time-consuming, incurs high labor costs, and is prone to errors. With the evolution of deep learning, researchers have applied these advances to water meter reading recognition. In this study, we propose a deep neural network-based algorithm for recognizing readings of pointer-style mechanical water meters, referred to as the PWMR-DL algorithm. A specialized dataset of pointer-style mechanical water meters was constructed for training and testing. To detect and correct the sub-dials, the Mask R-CNN model is employed to locate and segment the dials, coupled with an efficient correction strategy that rotationally adjusts individual sub-dials, enhancing the robustness of recognition across rotation angles and reducing errors. In the sub-dial reading recognition phase, the CA (channel attention) mechanism is introduced to refine the EfficientNet model, significantly improving reading accuracy. By increasing the classification dimension to 20 classes, the algorithm makes finer judgments when the dial pointer falls between numerals. Furthermore, by incorporating correction logic based on the sequence of sub-dial readings, an effective reading generation method is designed, substantially reducing errors. Experimental results show that the PWMR-DL algorithm achieves a 2.4% increase in sub-dial reading recognition precision compared with the unimproved EfficientNet model while adding only a small number of parameters, preserving the model's lightweight characteristic. Notably, PWMR-DL attains an overall recognition accuracy of 96.8% even under low-resolution imaging conditions.
Keywords: computer vision; EfficientNet; pointer water meter; reading recognition; CA mechanism
Abstract: Sparse CT image reconstruction is of great clinical significance for reducing patient radiation dose and supporting imaging diagnosis. In deep learning-based medical image reconstruction, existing methods often overlook the residual between the reconstructed image and the ground truth, leading to structural errors and insufficient detail in the reconstructed images. Generative adversarial networks (GANs) leverage adversarial learning to rapidly reconstruct global content and structural information, while diffusion models offer stable training and can reconstruct images with rich detail. To improve the quality of sparse CT reconstruction, a network named RRRNet that combines a GAN and a diffusion model is proposed. The network first uses a GAN as the primary generator to capture the global structural information of images; then a diffusion model is applied to model the residual between the real data and the initial prediction, performing residual prediction to refine it. In addition, a high-frequency information separation training module is introduced in the refinement process to enhance the recovery of edges and details. Validation on the LIDC dataset shows that at a 4.50% sampling rate, RRRNet achieves 96.40% SSIM, 40.76 dB PSNR, and 32.49 HU MAE. Compared with using either a GAN or a diffusion model alone, RRRNet improves the quality of reconstructed images.
Keywords: image reconstruction; deep learning; generative adversarial network; diffusion model
Abstract: A comprehensive evaluation method based on type-2 fuzzy sets (T2 FSs) is proposed, reducing the evaluation of teaching quality in graduate computer courses to a fuzzy multi-criteria group decision-making problem. First, an evaluation index system is constructed for different evaluation subjects from five aspects, including curriculum design, classroom participation, and teaching methods; second, T2 FSs are used to model natural-language evaluations, handling the uncertainty and ambiguity of the evaluation process; finally, the interval type-2 fuzzy weighted average operator is used to aggregate the attribute values and corresponding weights of the indicators, and a projection-based regret theory method is then used to rank the evaluation results. In this way, the uncertain information in the data sources is carried into the final evaluation results for analysis, ensuring the accuracy and credibility of the evaluation. Simulation experiments show that the proposed method provides more accurate and reliable evaluation conclusions than methods based on type-1 fuzzy sets, confirming its feasibility and effectiveness.
Keywords: teaching quality evaluation; multi-criteria group decision-making; evaluation criteria system; type-2 fuzzy set; fuzzy weighted average operator
Abstract: With the goal of building an effective discussion classroom, and taking a computer systems course as an example, this paper constructs an effective discussion classroom based on students' real feedback on discussion classes, focusing on the key questions of how students learn to discuss and how they learn during discussion, from the aspects of discussion problem design, dynamic adjustment of the implementation stages of the discussion class, and a collaborative, multi-dimensional evaluation mechanism. Teachers and students participate in the pre-class, in-class, and post-class stages of classroom discussion as a teaching community, promoting students' growth-oriented thinking and knowledge transfer abilities and stimulating their participation and sense of gain. Analysis of teaching data before and after the class shows that students' average self-evaluation score for the learning effectiveness of discussion classes increased by more than 55%, the quality of discussion problem solutions improved by 45%, and effective participation increased by 81%, improving teaching efficiency in student-centered classrooms.
Abstract: Professional accreditation of engineering education is an internationally recognized system for ensuring the quality of engineering education; the teaching quality monitoring system is its core component and plays an essential role in improving teaching quality and ensuring the quality of talent cultivation. Building and perfecting the teaching quality monitoring system is therefore an important way for universities to enhance teaching quality and ensure the quality of talent cultivation. This paper discusses the construction of the organizational system, the institutional system, and the evaluation and feedback system for teaching quality monitoring. As a result, the School of Computer Science at Inner Mongolia University has established a comprehensive, full-process teaching quality assurance system consisting of a top-level decision-making system, an organizational management system, a supervision and evaluation system, and a support system. After nearly three years of implementation, it has achieved the goals of talent cultivation and continuously improved training quality.
Abstract: In order to promptly grasp the learning situation of students on MOOC platforms and provide targeted tutoring and intervention, this study uses data mining techniques to analyze the learning behavior data of students in the "Principles and Applications of Databases" MOOC course at the Army Engineering University. By collecting six behavioral characteristics, including test scores, chapter scores, and video watching duration, it was found that these characteristics are significantly correlated with academic performance. Using the K-means clustering algorithm, students were categorized into four learning types: ideal, diligent, top-student, and at-risk, and an academic early warning model was constructed. Targeted early-warning intervention measures are proposed to improve learning outcomes and avoid academic risk. This study provides a basis for educators to improve teaching methods and enhance teaching quality.
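A minimal sketch of the clustering step described above, using scikit-learn's K-means to group learners into four types from behavioural features; the data here is a random stand-in for the six features in the abstract, and the preprocessing is an assumed choice.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

features = np.random.rand(200, 6)                       # 200 students x 6 behavioural features (stand-in)
X = StandardScaler().fit_transform(features)            # scale features before clustering
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
# labels in {0,1,2,3} correspond to learner types such as ideal / diligent / top-student / at-risk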
Abstract: In the face of information explosion and intricate user demands, optimizing recommendation systems is crucial, and large language models present a new opportunity in this regard. This paper contrasts traditional methods with recommendation systems built on large language models, aiming to clarify their respective advantages and limitations and to offer solid references for future R&D strategies. The study first reviews the evolution of traditional recommendation systems, including methods based on collaborative filtering, content, and knowledge graphs. It then examines the practical applications of large language models such as BERT and the GPT series, especially their performance under different adaptation settings. Through comparative analysis, we observe notable differences between the two paradigms in performance, user experience, system complexity, and resource consumption: traditional recommendation methods are better suited to scenarios with strong rules and relatively sparse, stable data, while recommendation methods based on large language models are better suited to scenarios that require understanding complex semantics, providing innovative solutions, and generating dynamic content.
Keywords: recommendation system; traditional model; large language models; comparative analysis
Abstract: Image super-resolution reconstruction is an important research direction in computer vision; its main goal is to restore high-resolution images from low-resolution images, improving image quality and clarity and making images more useful for visual perception and information extraction. To this end, a comprehensive survey of deep learning-based image super-resolution reconstruction techniques is conducted, and existing models are analyzed and compared. First, the background of image super-resolution reconstruction is reviewed; second, typical deep learning models and their principles are reviewed in detail; next, the shortcomings of existing research are analyzed and the current key research directions are outlined, introducing researchers' achievements in reducing computational costs and enhancing model adaptability through pre-training strategies, lightweight network design, optimization innovations, and other methods; finally, a summary and outlook on future research trends are provided, offering reference and inspiration for scholars in the field of deep learning-based image super-resolution reconstruction.