This study aims to predict estrogen receptor (ER) expression in breast cancer by radiomics. First, breast cancer images are segmented automatically by the phase-based active contour (PBAC) method. Second, high-throughput features of the ultrasound images are extracted and quantified; a total of 404 high-throughput features are divided into three categories: morphology, texture, and wavelet. The features are then selected with a genetic algorithm combined with the minimum-redundancy-maximum-relevance (mRMR) criterion, implemented in R. Finally, support vector machine (SVM) and AdaBoost classifiers are used to predict ER expression from breast ultrasound images. One hundred and four breast cancer patients were included in the experiment, and the best results were obtained with AdaBoost: the prediction accuracy for the molecular marker ER reached 75.96%, and the highest area under the receiver operating characteristic curve (AUC) was 79.39%. These experimental results verify the feasibility of predicting ER expression in breast cancer using radiomics.
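The pipeline described above (mRMR-style feature selection followed by SVM and AdaBoost classification) can be sketched as follows. This is an illustrative reconstruction, not the study's code: the data are synthetic stand-ins for the 404 high-throughput features and 104 patients, the greedy correlation-based mRMR here is a common simplification, and the genetic-algorithm step is omitted.

```python
# Hedged sketch of the abstract's pipeline: greedy mRMR-style feature
# selection, then SVM and AdaBoost classifiers (scikit-learn). All data
# and feature counts are synthetic placeholders, not the study's data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score, roc_auc_score

# Stand-in for 404 radiomic features measured on 104 patients.
X, y = make_classification(n_samples=104, n_features=404, n_informative=20,
                           random_state=0)

def mrmr_select(X, y, k):
    """Greedy mRMR: at each step pick the feature maximizing relevance
    (|correlation with label|) minus mean redundancy (|correlation|
    with features already selected)."""
    n = X.shape[1]
    relevance = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(n)])
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        scores = np.full(n, -np.inf)
        for j in range(n):
            if j in selected:
                continue
            redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                                  for s in selected])
            scores[j] = relevance[j] - redundancy
        selected.append(int(np.argmax(scores)))
    return selected

cols = mrmr_select(X, y, k=15)
X_tr, X_te, y_tr, y_te = train_test_split(X[:, cols], y, test_size=0.3,
                                          random_state=0, stratify=y)
results = {}
for name, clf in [("SVM", SVC(kernel="rbf", probability=True, random_state=0)),
                  ("AdaBoost", AdaBoostClassifier(random_state=0))]:
    clf.fit(X_tr, y_tr)
    prob = clf.predict_proba(X_te)[:, 1]
    results[name] = (accuracy_score(y_te, clf.predict(X_te)),
                     roc_auc_score(y_te, prob))
    print(name, results[name])
```

On real radiomic data, the selected feature count `k` and the classifier hyperparameters would be tuned by cross-validation rather than fixed as here.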
Accurate segmentation of breast ultrasound images is an important precondition for lesion assessment. Existing segmentation approaches suffer from massive parameter counts, slow inference, and large memory consumption. To tackle this problem, we propose T2KD Attention U-Net (dual-Teacher Knowledge Distillation Attention U-Net), a lightweight semantic segmentation method that combines dual-path joint distillation for breast ultrasound images. First, we designed two teacher models to learn fine-grained features from each class of images, according to the different feature representations and semantic information of benign and malignant breast lesions. We then leveraged joint distillation to train a lightweight student model. Finally, we constructed a novel weight-balance loss that focuses on the semantic features of small objects, addressing the class imbalance between tumor and background. Extensive experiments on Dataset BUSI and Dataset B demonstrated that T2KD Attention U-Net outperformed various knowledge distillation counterparts. Concretely, the accuracy, recall, Dice, and mIoU of the proposed method were 95.26%, 86.23%, 85.09%, 83.59%, and 77.78% on Dataset BUSI, respectively, and 97.95%, 92.80%, 88.33%, 88.40%, and 82.42% on Dataset B, respectively. Compared with other models, the performance of this model was significantly improved. Meanwhile, compared with the teacher model, the parameter count, size, and computational complexity of the student model were significantly reduced (2.2×10⁶ vs. 106.1×10⁶ parameters, 8.4 MB vs. 414 MB, and 16.59 GFLOPs vs. 205.98 GFLOPs, respectively). Indeed, the proposed model maintains performance while greatly decreasing the amount of computation, providing a new method for deployment in clinical medical scenarios.
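The two key ingredients above, dual-teacher distillation and a class-weighted loss that up-weights the small tumor class, can be sketched in a minimal form. This is an illustrative NumPy formulation under common KD conventions (temperature-softened teacher averaging plus weighted cross-entropy), not the paper's exact loss; the temperature, blend factor `alpha`, and class weights are hypothetical values.

```python
# Hedged sketch of dual-teacher distillation with a weight-balance term.
# The student matches the averaged soft predictions of two teachers while
# a class-weighted cross-entropy up-weights the rare tumor class.
import numpy as np

def softmax(z, T=1.0, axis=-1):
    z = z / T
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def dual_teacher_kd_loss(student_logits, teacher_a_logits, teacher_b_logits,
                         labels, T=4.0, alpha=0.5, class_weights=(1.0, 5.0)):
    """alpha blends distillation against hard-label supervision;
    class_weights (assumed values) up-weight foreground tumor pixels."""
    # Soften and average the two teachers' per-pixel class distributions.
    p_teacher = 0.5 * (softmax(teacher_a_logits, T) + softmax(teacher_b_logits, T))
    p_student = softmax(student_logits, T)
    # Distillation term: cross-entropy against the soft teacher targets,
    # scaled by T^2 as is conventional to keep gradient magnitudes stable.
    kd = -np.mean(np.sum(p_teacher * np.log(p_student + 1e-12), axis=-1)) * T * T
    # Supervised term: class-weighted cross-entropy on the hard labels.
    p = softmax(student_logits)
    w = np.asarray(class_weights)[labels]
    ce = -np.mean(w * np.log(p[np.arange(labels.size), labels] + 1e-12))
    return alpha * kd + (1 - alpha) * ce

# Toy example: 6 "pixels", 2 classes (background=0, tumor=1); in a real
# segmentation model the logits would come from per-pixel network outputs.
rng = np.random.default_rng(0)
loss = dual_teacher_kd_loss(rng.normal(size=(6, 2)),
                            rng.normal(size=(6, 2)),
                            rng.normal(size=(6, 2)),
                            np.array([0, 0, 0, 0, 0, 1]))
print(loss)
```

With only one tumor pixel among six, the weighted cross-entropy term keeps that pixel from being drowned out by the background majority, which is the imbalance problem the weight-balance loss targets.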