Graph attention convolution for point cloud semantic segmentation

Posted by Packy on November 26, 2019

2019_CVPR_Graph Attention Convolution for Point Cloud Semantic Segmentation

Main problem: how to avoid feature contamination in fine-grained point cloud segmentation.

Key difficulty: how to build an attention mechanism for point cloud data so that each point attends only to the neighboring points whose semantic labels are consistent with its own.

Method: the paper proposes the GAC network, which learns a shared attention function $g: \mathbb{R}^{3+F} \rightarrow \mathbb{R}^{K}$ over each point and its neighborhood.

(F is the dimension of the input point features; K is the dimension of the output features.)

Concretely, the attention is computed from the differences in point features and in spatial coordinates between a point and its neighboring points.

Each point carries an F-dimensional feature vector together with its original spatial coordinates (x, y, z).

The F-dimensional feature of each point is first mapped to K dimensions by an MLP $M_g$; then, for every neighbor $j$ of a point $i$, the differences $\Delta p_{ij} = p_j - p_i$ (spatial coordinates) and $\Delta h_{ij} = M_g(h_j) - M_g(h_i)$ (mapped features) are computed.

These differences are then fed through a shared attention mechanism with a learnable attention parameter (the weight matrix `self.a` in the reference code; an MLP $M_\alpha$ in the paper) to produce the attention weights.

Unlike the usual node-level attention mechanism (one scalar weight per neighbor), this is more like channel-level attention: every neighbor gets a separate attention weight for each of the K output channels.
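To make this concrete, here is a minimal PyTorch sketch of the per-point computation described above, for a single point and its neighborhood. The names and sizes (`mlp_g`, `mlp_alpha`, `F_in`, `K`, `num_neighbors`) are illustrative assumptions of mine, not the paper's or the reference repo's; the sketch only mirrors the idea of mapping features with an MLP, taking coordinate/feature differences, and turning them into K channel-wise attention scores per neighbor.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sizes (not from the paper): F_in input feature dim, K output dim.
F_in, K, num_neighbors = 16, 32, 8

# M_g: maps each point's F-dimensional feature to K dimensions.
mlp_g = nn.Linear(F_in, K)
# M_alpha: maps the concatenated differences (3 coords + K features) to K attention scores.
mlp_alpha = nn.Sequential(nn.Linear(3 + K, K), nn.ReLU(), nn.Linear(K, K))

# One center point i and its neighbors j.
p_i, h_i = torch.randn(3), torch.randn(F_in)
p_j, h_j = torch.randn(num_neighbors, 3), torch.randn(num_neighbors, F_in)

delta_p = p_j - p_i                          # spatial differences, [num_neighbors, 3]
delta_h = mlp_g(h_j) - mlp_g(h_i)            # feature differences, [num_neighbors, K]

# Channel-level attention: one score per neighbor *and* per output channel.
scores = mlp_alpha(torch.cat([delta_p, delta_h], dim=-1))   # [num_neighbors, K]
attention = F.softmax(scores, dim=0)                        # normalize over neighbors

# Aggregate the neighbors' mapped features with the channel-wise weights.
h_i_new = (attention * mlp_g(h_j)).sum(dim=0)               # [K]
```

Normalizing over the neighbors and taking the weighted sum corresponds to the graph-pooling step shown in the code excerpt below.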

self.a = nn.Parameter(torch.zeros(size=(all_channel, feature_dim)))  # learnable attention weight matrix

# grouped_feature: features of the sampled neighbor points, shape [B, npoint, nsample, D]
# e: raw attention scores, computed earlier in model.py from self.a and the per-neighbor differences

attention = F.softmax(e, dim=2)  # normalize over the nsample neighbors, [B, npoint, nsample, D]

graph_pooling = torch.sum(torch.mul(attention, grouped_feature), dim=2)  # weighted sum over neighbors, [B, npoint, D]

From https://github.com/yanx27/GACNet/blob/master/model.py
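Expanding those excerpted lines, here is a small self-contained sketch of the attention-plus-graph-pooling step with the batched shapes [B, npoint, nsample, D] from the comments above. How the raw scores `e` are actually produced in model.py is only paraphrased here (a plain matmul with `self.a`), so the details may differ from the original repo; the class and variable names are my own.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttentionPooling(nn.Module):
    """Sketch of the attention + graph-pooling step, assuming the inputs are
    already grouped into fixed-size neighborhoods as in the repo's comments."""

    def __init__(self, all_channel, feature_dim):
        super().__init__()
        # Learnable attention weights, created with zeros as in the excerpt above;
        # a Xavier init like this is a common choice and an assumption here.
        self.a = nn.Parameter(torch.zeros(size=(all_channel, feature_dim)))
        nn.init.xavier_uniform_(self.a)

    def forward(self, grouped_diff, grouped_feature):
        # grouped_diff:    [B, npoint, nsample, all_channel]  coordinate + feature differences
        # grouped_feature: [B, npoint, nsample, feature_dim]  neighbor features to aggregate
        e = torch.matmul(grouped_diff, self.a)          # raw scores, [B, npoint, nsample, feature_dim]
        attention = F.softmax(e, dim=2)                 # normalize over the nsample neighbors
        graph_pooling = torch.sum(attention * grouped_feature, dim=2)  # [B, npoint, feature_dim]
        return graph_pooling

# Usage with dummy shapes (B=2 batches, 64 sampled points, 16 neighbors each):
B, npoint, nsample = 2, 64, 16
all_channel, feature_dim = 3 + 32, 32
layer = GraphAttentionPooling(all_channel, feature_dim)
out = layer(torch.randn(B, npoint, nsample, all_channel),
            torch.randn(B, npoint, nsample, feature_dim))
print(out.shape)  # torch.Size([2, 64, 32])
```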

The relevant passage from the paper:

> To this end, we construct a sharing attention mechanism $\alpha: \mathbb{R}^{3+F} \rightarrow \mathbb{R}^{K}$ to focus on the most relevant part of the neighbors for feature learning, so that the convolution kernel of GAC can dynamically adapt to the structure of the objects. Specifically, the attentional weight of each neighboring vertex is computed as follows:
>
> $$\tilde{a}_{ij} = \alpha(\Delta p_{ij}, \Delta h_{ij}), \quad \forall j \in \mathcal{N}(i) \tag{1}$$
>
> where $\tilde{a}_{ij} = [\tilde{a}_{ij,1}, \tilde{a}_{ij,2}, \dots, \tilde{a}_{ij,K}] \in \mathbb{R}^{K}$ indicates the attentional weight vector of vertex $j$ to vertex $i$, $\Delta p_{ij} = p_j - p_i$, and $\Delta h_{ij} = M_g(h_j) - M_g(h_i)$, where $M_g$ is a feature mapping function applied on each vertex, i.e., $M_g$ is a multilayer perceptron. The first term of $\alpha$ indicates the spatial relations of the neighboring vertices, which helps to span the unordered neighbors to a meaningful surface. The second term measures the feature difference between vertex pairs, which guides us to assign more attention to the similar neighbors. The sharing attention mechanism $\alpha$ can be implemented with any differentiable architecture; we use a multilayer perceptron in this work (as shown in Figure 2), which can be formulated as follows:
>
> $$\alpha(\Delta p_{ij}, \Delta h_{ij}) = M_\alpha(\Delta p_{ij} \,\|\, \Delta h_{ij}) \tag{2}$$
>
> where $\|$ is the concatenation operation, and $M_\alpha$ indicates the applied multilayer perceptron.
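Tying equations (1) and (2) back to the code excerpt: the per-channel weights $\tilde{a}_{ij}$ are normalized with a softmax over each point's neighbors, and the output feature of point $i$ is the attention-weighted sum over those neighbors. Written out (my reconstruction from the `softmax` and `graph_pooling` lines; whether the aggregated term is exactly $M_g(h_j)$ in the paper is not shown here):

```latex
a_{ij,k} \;=\; \frac{\exp(\tilde{a}_{ij,k})}{\sum_{l \in \mathcal{N}(i)} \exp(\tilde{a}_{il,k})},
\qquad
\hat{h}_{i,k} \;=\; \sum_{j \in \mathcal{N}(i)} a_{ij,k}\,\big[M_g(h_j)\big]_k,
\qquad k = 1, \dots, K
```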

Graph construction on a point cloud.