
Study of impact of X-Ray imagery nature on keypoints detection and description quality

© 2020 M. O. Chekanov, O. S. Shipitko

Institute for Information Transmission Problems, 127051 Moscow, Bolshoy Karetnyy Pereulok, 19, Russia
Moscow Institute of Physics and Technology (National Research University), 141700 Dolgoprudny, Institutskiy Pereulok, 9, Russia

Received 13 Jan 2020

In this work, we study the quality of keypoint detection and description algorithms (SIFT, SURF, ORB, BRISK, AKAZE) on digital X-ray and visible-spectrum images. We also compare the quality metrics of the algorithms across the two spectra and their robustness to various image transformations. The quality of the algorithms is evaluated on images of the same object taken in the visible and X-ray spectra. Geometric transformations (rotation, shearing, scaling), linear color transformations, and Gaussian blur are applied to the images, and the detection and description algorithms are then run on the original and transformed images. For the detection algorithms, the repeatability and the number of corresponding points are calculated. For the description algorithms, the ratio of correctly matched descriptors is calculated, as well as the ratio of the distances from the query descriptor to the nearest and the second-nearest descriptors. The algorithms behave differently on the two spectra: SURF proved to be the best keypoint detector for X-ray images, while AKAZE was the best detector in the visible spectrum; SIFT is the best descriptor in both spectra. The strengths and weaknesses of each algorithm are also discussed in the paper.
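To illustrate the general evaluation scheme described above, the sketch below uses OpenCV to detect keypoints on an original and a rotated image, estimates repeatability from the known transform, and applies the nearest-to-second-nearest distance ratio test to the descriptors. It is a minimal sketch, not the authors' evaluation code: the file name, rotation angle, pixel tolerance, and 0.75 ratio threshold are illustrative assumptions, and SURF additionally requires an opencv-contrib build with the non-free modules enabled.

```python
# Minimal sketch of the evaluation pipeline described in the abstract (OpenCV).
# Input file, rotation angle, tolerance and ratio threshold are assumptions.
import cv2
import numpy as np


def detect_and_describe(img, name="AKAZE"):
    """Create a detector/descriptor by name and run it on a grayscale image."""
    factories = {
        "SIFT": cv2.SIFT_create,
        "ORB": cv2.ORB_create,
        "BRISK": cv2.BRISK_create,
        "AKAZE": cv2.AKAZE_create,
        # SURF needs an opencv-contrib build with non-free modules enabled:
        # "SURF": cv2.xfeatures2d.SURF_create,
    }
    keypoints, descriptors = factories[name]().detectAndCompute(img, None)
    return keypoints, descriptors


def repeatability(kp_src, kp_dst, affine, eps=3.0):
    """Fraction of keypoints that reappear within eps pixels after mapping
    their coordinates with the known affine transform."""
    if not kp_src or not kp_dst:
        return 0.0
    src = np.float32([k.pt for k in kp_src]).reshape(-1, 1, 2)
    proj = cv2.transform(src, affine).reshape(-1, 2)   # project into the transformed image
    dst = np.float32([k.pt for k in kp_dst])
    repeated = sum(np.min(np.linalg.norm(dst - p, axis=1)) < eps for p in proj)
    return repeated / min(len(kp_src), len(kp_dst))


def ratio_test_matches(des_src, des_dst, binary, ratio=0.75):
    """Keep a match only if the nearest descriptor is clearly closer than the
    second-nearest one (distance ratio test)."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING if binary else cv2.NORM_L2)
    good = []
    for pair in matcher.knnMatch(des_src, des_dst, k=2):
        if len(pair) < 2:
            continue
        m, n = pair
        if m.distance < ratio * n.distance:
            good.append(m)
    return good


if __name__ == "__main__":
    img = cv2.imread("xray.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
    h, w = img.shape
    # One of the transformations studied in the paper: in-plane rotation.
    affine = cv2.getRotationMatrix2D((w / 2, h / 2), 30, 1.0)
    warped = cv2.warpAffine(img, affine, (w, h))

    kp1, des1 = detect_and_describe(img, "AKAZE")
    kp2, des2 = detect_and_describe(warped, "AKAZE")

    print("repeatability:", repeatability(kp1, kp2, affine))
    print("ratio-test matches:", len(ratio_test_matches(des1, des2, binary=True)))
```

AKAZE, ORB, and BRISK produce binary descriptors matched with the Hamming norm, while SIFT and SURF produce floating-point descriptors matched with the L2 norm, which is why the matcher norm is switched on the `binary` flag.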

Key words: keypoint, keypoint detector, keypoint descriptor, repeatability, digital X-ray image

DOI: 10.31857/S023500922002002X

Cite: Chekanov M. O., Shipitko O. S. Issledovanie vliyaniya prirody rentgenovskikh izobrazhenii na kachestvo detektsii i deskriptsii osobykh tochek [Study of impact of x-ray imagery nature on keypoints detection and description quality]. Sensornye sistemy [Sensory systems]. 2020. V. 34(2). P. 156–171 (in Russian). doi: 10.31857/S023500922002002X
