Adding the ShuffleAttention Attention Mechanism to YOLOv8


1. Introduction

As an introductory article, this post shows how to use the ShuffleAttention attention mechanism in YOLOv8. It covers how ShuffleAttention works, the ShuffleAttention code, how to add the module to YOLOv8, and the resulting yaml configuration file together with a training run.

2. How ShuffleAttention works

Official ShuffleAttention paper: [link]
Official ShuffleAttention code: [link]

ShuffleAttention uses a Shuffle Unit to combine two types of attention effectively. It first splits the channel dimension into several groups of sub-features and processes them in parallel. For each sub-feature, a Shuffle Unit captures feature dependencies along both the spatial and the channel dimension. All sub-features are then aggregated, and a "channel shuffle" operator lets information flow between the different sub-features.

3. The code

The ShuffleAttention module, with the imports it needs, is as follows:

```python
import torch
from torch import nn
from torch.nn import init
from torch.nn.parameter import Parameter


class ShuffleAttention(nn.Module):
    def __init__(self, channel=512, reduction=16, G=8):
        super().__init__()
        self.G = G
        self.channel = channel
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.gn = nn.GroupNorm(channel // (2 * G), channel // (2 * G))
        self.cweight = Parameter(torch.zeros(1, channel // (2 * G), 1, 1))
        self.cbias = Parameter(torch.ones(1, channel // (2 * G), 1, 1))
        self.sweight = Parameter(torch.zeros(1, channel // (2 * G), 1, 1))
        self.sbias = Parameter(torch.ones(1, channel // (2 * G), 1, 1))
        self.sigmoid = nn.Sigmoid()

    def init_weights(self):
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                init.kaiming_normal_(m.weight, mode='fan_out')
                if m.bias is not None:
                    init.constant_(m.bias, 0)
            elif isinstance(m, nn.BatchNorm2d):
                init.constant_(m.weight, 1)
                init.constant_(m.bias, 0)
            elif isinstance(m, nn.Linear):
                init.normal_(m.weight, std=0.001)
                if m.bias is not None:
                    init.constant_(m.bias, 0)

    @staticmethod
    def channel_shuffle(x, groups):
        b, c, h, w = x.shape
        x = x.reshape(b, groups, -1, h, w)
        x = x.permute(0, 2, 1, 3, 4)
        # flatten
        x = x.reshape(b, -1, h, w)
        return x

    def forward(self, x):
        b, c, h, w = x.size()
        # group into sub-features
        x = x.view(b * self.G, -1, h, w)  # bs*G, c//G, h, w

        # channel split
        x_0, x_1 = x.chunk(2, dim=1)  # bs*G, c//(2*G), h, w

        # channel attention
        x_channel = self.avg_pool(x_0)  # bs*G, c//(2*G), 1, 1
        x_channel = self.cweight * x_channel + self.cbias  # bs*G, c//(2*G), 1, 1
        x_channel = x_0 * self.sigmoid(x_channel)

        # spatial attention
        x_spatial = self.gn(x_1)  # bs*G, c//(2*G), h, w
        x_spatial = self.sweight * x_spatial + self.sbias  # bs*G, c//(2*G), h, w
        x_spatial = x_1 * self.sigmoid(x_spatial)  # bs*G, c//(2*G), h, w

        # concatenate along the channel axis
        out = torch.cat([x_channel, x_spatial], dim=1)  # bs*G, c//G, h, w
        out = out.contiguous().view(b, -1, h, w)

        # channel shuffle
        out = self.channel_shuffle(out, 2)
        return out
```

4. How to use ShuffleAttention in YOLOv8

1. Add the ShuffleAttention module to YOLOv8: append the ShuffleAttention code above to the end of ultralytics/nn/modules/conv.py.
2. Add the class name ShuffleAttention to the __all__ declaration at the top of conv.py.
3. In __init__.py in the same folder, add the corresponding entries for ShuffleAttention: `from .conv import ShuffleAttention`, and add ShuffleAttention to __all__ there as well (steps 2 and 3 are sketched after the registration snippet below).
4. Register ShuffleAttention in ultralytics/nn/tasks.py, then reference it in a YOLOv8 yaml configuration file.

For step 4, open tasks.py, press Ctrl+F and search for parse_model to find the parse_model function. Add the following registration branch before its last else:

```python
elif m in {CBAM, ECA, ShuffleAttention}:  # attention modules; remove CBAM / ECA from the set if you have not added them
    c1, c2 = ch[f], args[0]
    if c2 != nc:
        c2 = make_divisible(min(c2, max_channels) * width, 8)
    args = [c1, *args[1:]]
```
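For reference, the edits in steps 2 and 3 might look like the sketch below. The names already present in each __all__ depend on your ultralytics version; 'Conv' is shown only as a stand-in for whatever the declaration already contains.

```python
# ultralytics/nn/modules/conv.py -- step 2 (sketch):
# append 'ShuffleAttention' to the existing __all__ tuple, e.g.
__all__ = ('Conv', 'ShuffleAttention')  # ...keep all names already listed there

# ultralytics/nn/modules/__init__.py -- step 3 (sketch):
from .conv import ShuffleAttention  # alongside the existing .conv imports
__all__ = ('Conv', 'ShuffleAttention')  # again, append rather than replace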

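As a quick sanity check on what the registration branch computes: for a layer declared with args [1024] at scale 'n' (width 0.25, max_channels 1024, per the scales table in the yaml below), the channel count works out as follows. make_divisible is re-implemented here from what I understand ultralytics' version to do (round up to the nearest multiple) so the snippet runs standalone; treat that as an assumption about your version.

```python
import math

def make_divisible(x, divisor=8):
    # Assumed to match ultralytics.utils.ops.make_divisible: round UP to a multiple of divisor.
    return math.ceil(x / divisor) * divisor

width, max_channels = 0.25, 1024  # YOLOv8n scale from the yaml below
nc, c2 = 80, 1024                 # nc and args[0] of the ShuffleAttention entry
if c2 != nc:
    c2 = make_divisible(min(c2, max_channels) * width, 8)
print(c2)  # 256 -> at scale 'n' the layer is recorded as 256 channels, matching its input
```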
Then create a new config file named YOLOv8_ShuffleAttention.yaml at ultralytics/cfg/models/v8/YOLOv8_ShuffleAttention.yaml:

```yaml
# Ultralytics YOLO 🚀, AGPL-3.0 license

# YOLOv8 object detection model with P3-P5 outputs. For usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters

nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call this file with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024] # YOLOv8n summary: 225 layers, 3157200 parameters, 3157184 gradients, 8.9 GFLOPs
  s: [0.33, 0.50, 1024] # YOLOv8s summary: 225 layers, 11166560 parameters, 11166544 gradients, 28.8 GFLOPs
  m: [0.67, 0.75, 768] # YOLOv8m summary: 295 layers, 25902640 parameters, 25902624 gradients, 79.3 GFLOPs
  l: [1.00, 1.00, 512] # YOLOv8l summary: 365 layers, 43691520 parameters, 43691504 gradients, 165.7 GFLOPs
  x: [1.00, 1.25, 512] # YOLOv8x summary: 365 layers, 68229648 parameters, 68229632 gradients, 258.5 GFLOPs

# YOLOv8.0n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
  - [-1, 3, C2f, [1024, True]]
  - [-1, 1, ShuffleAttention, [1024]] # 9
  - [-1, 1, SPPF, [1024, 5]] # 10

# YOLOv8.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 3, C2f, [512]] # 13
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 3, C2f, [256]] # 16 (P3/8-small)
  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 13], 1, Concat, [1]] # cat head P4
  - [-1, 3, C2f, [512]] # 19 (P4/16-medium)
  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 10], 1, Concat, [1]] # cat head P5
  - [-1, 3, C2f, [1024]] # 22 (P5/32-large)
  - [[16, 19, 22], 1, Detect, [nc]] # Detect(P3, P4, P5)
```

Set nc to match your own dataset; the coco8 dataset used for this test has 80 classes. Note that inserting ShuffleAttention after backbone layer 8 shifts every subsequent layer index up by one, which is why the head's Concat sources and the Detect inputs ([16, 19, 22]) differ from the stock yolov8.yaml.

Then create a new train.py in the repository root with the following content:

```python
from ultralytics import YOLO
import warnings

with warnings.catch_warnings():
    warnings.simplefilter('ignore')
    model = YOLO('ultralytics/cfg/models/v8/YOLOv8_ShuffleAttention.yaml')  # build a new model from the YAML
    results = model.train(data='ultralytics/cfg/datasets/coco8.yaml',
                          epochs=1, imgsz=640, optimizer='SGD')
```

Training output: [screenshots of the training run]

5. Summary

That covers how ShuffleAttention works and how to plug it into YOLOv8. Exactly where in the network the ShuffleAttention layer helps most, however, is an open question: it has to be verified experimentally on each dataset. I hope this article helps you get started with attention mechanisms in YOLO.
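A final tip for those placement experiments: since ShuffleAttention only reweights features, a standalone shape check confirms it can be dropped between any two layers without disturbing the surrounding channel counts. The sketch below assumes step 1 above placed the class in conv.py; otherwise, paste the class from section 3 directly into the script.

```python
import torch
from ultralytics.nn.modules.conv import ShuffleAttention  # requires step 1 above

x = torch.randn(2, 512, 32, 32)            # (batch, channels, height, width)
attn = ShuffleAttention(channel=512, G=8)  # channel must match the input's channel count
y = attn(x)
print(y.shape)  # torch.Size([2, 512, 32, 32]) -- features are reweighted, shape is unchanged
```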