Original image

Turns out Sister Feng's most beautiful feature is her eyes, not her mouth -_-|||
The parameters of the drawMatches function are as follows:
outImg = cv.drawMatches( img1, keypoints1, img2, keypoints2, matches1to2, outImg[, matchColor[, singlePointColor[, matchesMask[, flags]]]] )
What is a matcher object?
matches = bf.match(des1,des2) returns a list of DMatch objects. A DMatch object has the following attributes:
DMatch.distance: distance between the descriptors; the lower, the better
DMatch.trainIdx: index of the descriptor in the train descriptors (des2)
DMatch.queryIdx: index of the descriptor in the query descriptors (des1)
DMatch.imgIdx: index of the train image
SIFT brute-force matching and the ratio test
This time we use BFMatcher.knnMatch() to find the k best matches. In this example we choose k=2, so we can apply the ratio test from Lowe's paper.
import numpy as np
import cv2
from matplotlib import pyplot as plt
img1 = cv2.imread("1.jpg",0)
img2 = cv2.imread("26.jpg",0)
sift = cv2.xfeatures2d.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1,None)
kp2, des2 = sift.detectAndCompute(img2,None)
bf = cv2.BFMatcher()
matches = bf.knnMatch(des1,des2,k=2)
good = []
for m, n in matches:
    if m.distance < 0.75 * n.distance:
        good.append([m])
img3 = cv2.drawMatchesKnn(img1, kp1, img2, kp2, good, None, flags=2)
plt.imshow(img3), plt.show()
Not very accurate… -_-
FLANN-based matcher
FLANN stands for Fast Library for Approximate Nearest Neighbors. It contains a collection of algorithms optimized for nearest-neighbor search on large datasets and high-dimensional features. On large datasets it performs better than brute-force matching.
FLANN takes two dictionaries as parameters.
The first specifies which index algorithm to use; see here for details.
The second specifies how many times the index trees should be searched (checks); more checks give a more accurate result, but are also slower.
import numpy as np
import cv2
from matplotlib import pyplot as plt
img1 = cv2.imread("1.jpg",0)
img2 = cv2.imread("26.jpg",0)
sift = cv2.xfeatures2d.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1,None)
kp2, des2 = sift.detectAndCompute(img2,None)
FLANN_INDEX_KDTREE = 1  # kd-tree index (in FLANN, 0 is linear search)
index_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)
search_params = dict(checks=50) # or pass empty dictionary
flann = cv2.FlannBasedMatcher(index_params,search_params)
matches = flann.knnMatch(des1,des2,k=2)
# Need to draw only good matches, so create a mask
# Need to draw only good matches, so create a mask
matchesMask = [[0, 0] for i in range(len(matches))]
# ratio test as per Lowe's paper
for i, (m, n) in enumerate(matches):
    if m.distance < 0.7 * n.distance:
        matchesMask[i] = [1, 0]
draw_params = dict(matchColor=(0, 255, 0),
                   singlePointColor=(255, 0, 0),
                   matchesMask=matchesMask,
                   flags=0)
img3 = cv2.drawMatchesKnn(img1, kp1, img2, kp2, matches, None, **draw_params)
plt.imshow(img3), plt.show()
The result isn't as good as the ORB implementation's.