To address the insufficient feature extraction and low answer-prediction accuracy that arise when long short-term memory (LSTM) networks and attention mechanisms process text sequences in machine reading comprehension, we propose a span-extraction machine reading comprehension model that hybridizes dynamic convolution with attention. Because the current input and the previous state of an LSTM are independent of each other, context information may be lost; we therefore adopt the Mogrifier as the encoder, letting the current input and the previous state interact fully over several rounds so as to enhance the salient structural features of the context and the question while weakening secondary features. Secondly, because static convolution uses a single fixed kernel, it can only extract features from fixed-length spans of text, which may hinder the machine from better understanding the text. By introducing dynamic convolution, one-dimensional convolutions with multiple kernels of different sizes capture the local structure of the context and the question, compensating for the attention mechanism's purely global receptive ability. Experimental results on the SQuAD dataset show that, compared with other models, the proposed method effectively improves the model's ability in feature extraction and answer prediction.
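The two mechanisms named above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the number of gating rounds, the projection matrices, the kernel sizes, and the single-channel signal are all illustrative assumptions. The first part shows Mogrifier-style mutual gating between the input and the previous state before an LSTM cell; the second shows a dynamic convolution that mixes several 1-D kernels with per-position weights predicted from the input.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8          # feature size (illustrative)
rounds = 4     # number of mutual-gating rounds (hypothetical hyperparameter)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# --- Mogrifier-style mutual gating before the LSTM cell ---
# Hypothetical projection matrices, one pair per round.
Q = [0.1 * rng.standard_normal((d, d)) for _ in range(rounds)]
R = [0.1 * rng.standard_normal((d, d)) for _ in range(rounds)]

def mogrify(x, h):
    """Alternately let the previous state gate the input and vice versa."""
    for i in range(rounds):
        if i % 2 == 0:
            x = 2.0 * sigmoid(Q[i] @ h) * x   # state modulates input
        else:
            h = 2.0 * sigmoid(R[i] @ x) * h   # input modulates state
    return x, h

x, h = rng.standard_normal(d), rng.standard_normal(d)
x_m, h_m = mogrify(x, h)      # gated input/state would feed the LSTM cell

# --- Dynamic convolution: per-position mixture of several 1-D kernels ---
T = 16
signal = rng.standard_normal(T)             # a single-channel sequence
sizes = [1, 3, 5]                           # different kernel widths
kernels = [0.5 * rng.standard_normal(k) for k in sizes]

def conv_same(seq, ker):
    """1-D convolution with 'same' padding (output length equals input)."""
    pad = len(ker) // 2
    padded = np.pad(seq, pad)
    return np.array([padded[t:t + len(ker)] @ ker for t in range(len(seq))])

outs = np.stack([conv_same(signal, k) for k in kernels])   # (K, T)

# Mixture weights predicted from the input at each position (softmax over K),
# so the effective kernel varies along the sequence.
W = rng.standard_normal((len(sizes), 1))
logits = W @ signal[None, :]                               # (K, T)
weights = np.exp(logits) / np.exp(logits).sum(axis=0)      # column-wise softmax
y = (weights * outs).sum(axis=0)                           # (T,) dynamic-conv output
```

The intent of the sketch is only to show the contrast drawn in the abstract: the gating rounds make input and state depend on each other before encoding, while the input-conditioned kernel mixture captures local structure at multiple scales rather than a single fixed length.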