Reviewing the Categories of Journals in the WOS, Scopus, and MathSciNet Databases under the Quartile Titles (Richard & Sun)

Document Type : Original Article

Authors

1 Associate Professor, Department of Statistics, Ferdowsi University of Mashhad, Mashhad, Iran.

2 Master of Economic and Social Statistics, Ferdowsi University of Mashhad, Mashhad, Iran.

Abstract

Introduction: Citation analysis plays an important role in research evaluation, and its results are widely available to researchers for review and use. Citation is a scientometric index used to evaluate the impact of science; its most important applications are science policy-making and research evaluation. For large citation databases to grow their coverage steadily and to credit publications more accurately, the indexes of their scientific publications should be evaluated and reviewed by users and researchers so that appropriate measures can be taken. The Scopus, WOS, and MathSciNet citation databases rank journals using their own indexes and then classify them into four quality categories based on the quartiles of the index. These categories represent the citation value of the sources in the journals. One of the factors in choosing a journal for submitting articles is the quality category, or ranking, of that journal in citation databases; the classification of journals by quality is therefore of great importance. The existing method in citation databases for categorizing journals is not suitable, because from a statistical point of view the statistical distribution of these indexes is not taken into account when finding their quartiles. In this research, we introduce citation databases in some related branches, with references to their articles. In each database, the collection of journals is divided into four equal categories based on their indexes: the Q1 category includes journals with the highest index values, Q2 includes the second category of values, Q3 the third category of values, and Q4 the journals with the lowest values. These quartiles do not reflect the quality of the articles, but the quality of the journals in terms of citations.
The Scopus, WOS, and MathSciNet databases and their indexes are evaluated in this paper. The aim of this study is to introduce new methods for assessing the validity of journals in the Scopus, WOS, and MathSciNet databases. The resulting categories are also compared using descriptive statistics as well as various parametric and non-parametric statistical inference methods.
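The Q1–Q4 split described above can be sketched non-parametrically as follows. The index values are hypothetical, and the cut-off rule (Q1 = top quarter of the index distribution) follows the convention in the abstract:

```python
# A minimal sketch of the non-parametric quartile categorization: journals
# are ranked by a citation index (hypothetical values below) and split into
# four categories, Q1 (highest index values) through Q4 (lowest).
import statistics

def quartile_category(index_value, values):
    """Assign a journal to Q1..Q4 using the sample quartiles of `values`."""
    q1, q2, q3 = statistics.quantiles(values, n=4)  # 25th, 50th, 75th percentiles
    if index_value >= q3:
        return "Q1"  # top quarter of the index distribution
    elif index_value >= q2:
        return "Q2"
    elif index_value >= q1:
        return "Q3"
    return "Q4"

# Hypothetical SJR-like index values for a small set of journals.
indexes = [0.12, 0.35, 0.48, 0.77, 1.02, 1.45, 2.10, 3.60]
cats = {v: quartile_category(v, indexes) for v in indexes}
```

With eight journals, each category receives two journals, which is exactly the "four equal categories" behavior the databases aim for.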
Methodology: Using official and registration statistics, the list of journals along with their ranking index was extracted from the Scopus, WOS, and MathSciNet databases; in other words, this research uses a census as its statistical survey method. An Excel 2013 spreadsheet was used for data entry. After collecting the data, the quartiles were calculated with parametric (based on a fitted statistical distribution) and non-parametric methods according to the specific index of each database; quartile quality categories were defined, and the journals were categorized accordingly. Then, using a contingency table and the Kappa agreement coefficient, the agreement between classifications was measured. SPSS (version 23) was used to perform the Kappa agreement test, and EasyFit (version 5.5) was used to obtain the parametric quartiles from the fitted distribution. In this way, each database's own classification is compared with the classifications obtained from the non-parametric and parametric methods. The significance level of all tests in this research is 5%.
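The two quartile-finding approaches can be contrasted in a short sketch. The lognormal assumption here is purely illustrative (in the paper, the distribution family was fitted with EasyFit and may differ), and the index values are hypothetical:

```python
# Non-parametric vs. parametric quartiles for a journal ranking index.
# Assumption for illustration: log(index) ~ Normal(mu, sigma), fitted by
# the method of moments. The real study fits the distribution with EasyFit.
import math
from statistics import NormalDist, quantiles

indexes = [0.12, 0.35, 0.48, 0.77, 1.02, 1.45, 2.10, 3.60]  # hypothetical

# Non-parametric: sample quartiles taken straight from the data.
nonparam_q = quantiles(indexes, n=4)

# Parametric: fit log(index) by moments, then invert the fitted CDF.
logs = [math.log(x) for x in indexes]
mu = sum(logs) / len(logs)
sigma = math.sqrt(sum((v - mu) ** 2 for v in logs) / (len(logs) - 1))
fitted = NormalDist(mu, sigma)
param_q = [math.exp(fitted.inv_cdf(p)) for p in (0.25, 0.50, 0.75)]
```

The two sets of cut-offs generally disagree; the paper's point is that the parametric cut-offs respect the shape of the index distribution rather than only the sample order statistics.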
Findings: The classification of statistical and mathematical journals in the MathSciNet database based on the parametric and non-parametric methods was compared, in both the pure and applied fields. The results show that the p-value of the Kappa agreement coefficient is less than 0.001, which is below the 0.05 significance level of the test, so it can be said that the Kappa coefficient for these categories is not zero. It is concluded that the non-parametric and parametric methods do not differ in the categories they obtain for the pure and applied fields. The magnitude of the Kappa coefficient indicates high agreement, both between the classifications of the non-parametric and parametric methods and between the pure and applied fields of this database. In the comparison of the parametric and non-parametric methods for the pure and applied fields of statistics based on the MCQ index in the MathSciNet database, the accuracy of these comparisons in the pure and applied fields is 79.44% and 97.11%, respectively, and the misclassification rates are 20.56% and 2.89%, respectively. The results show that the proposed methods for categorizing the journals in each database are more accurate and more efficient, and categorize the journals according to their actual validity.
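The agreement and accuracy measures reported above can be computed from a contingency table that cross-tabulates the two classifications. The 4x4 table below is illustrative, not the paper's data:

```python
# Cohen's kappa and accuracy from a 4x4 contingency table of quartile
# categories. Rows: non-parametric category Q1..Q4; columns: parametric
# category Q1..Q4. The counts are hypothetical.
def cohens_kappa(table):
    n = sum(sum(row) for row in table)
    # Observed agreement: proportion of journals on the diagonal.
    po = sum(table[i][i] for i in range(len(table))) / n
    # Chance agreement: product of matching row and column marginals.
    pe = sum(sum(row) * sum(col)
             for row, col in zip(table, zip(*table))) / n ** 2
    return (po - pe) / (1 - pe)

table = [[20,  2,  0,  0],
         [ 1, 18,  3,  0],
         [ 0,  2, 19,  1],
         [ 0,  0,  2, 22]]

n = sum(sum(row) for row in table)
accuracy = sum(table[i][i] for i in range(4)) / n  # diagonal proportion
misclassification = 1 - accuracy
kappa = cohens_kappa(table)
```

A kappa near 1 indicates that the two classification methods place journals in the same quartile far more often than chance would, which is the pattern the study reports for MathSciNet.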
Conclusion: The parametric method is more accurate than the existing classification method in the three databases, because it takes into account the statistical distribution of the journal ranking index. Therefore, in addition to classifying journals by quality more accurately, the parametric method helps researchers select a journal according to its quality category more accurately and efficiently. This paper also attempts to provide a classification for mathematics and statistics journals.

Keywords


References

Sadeghi Gourji, S., & Sal Moslehian, M. (2014). Comparing MCQ with JIF2 and JIF5 for mathematics journals. Iranian Journal of Information Processing and Management, 29(4), 15. (in Persian) Retrieved October 2, 2022, from https://jipm.irandoc.ac.ir/article-1-2648-fa.pdf
Foroughi, Z., Tahmasebi Limooni, S., & Ghiasi, M. (2020). A review of the status of scientometric indicators and the selection of an evaluation index for scientific outputs in the medical sciences. Clinical Excellence, 9(4). (in Persian) Retrieved October 2, 2022, from http://ce.mazums.ac.ir/article-1-498-fa.pdf
Ghanadinejad, F., & Heidari, G. (2020). Methods and indicators for evaluating scientific products in the humanities and social sciences: a systematic review. Scientometrics Research Journal, 6(12), 203-230. (in Persian) doi: https://doi.org/10.22070/rsci.2020.4998.1341
Noroozi Chakoli, A., & Rahjoo, A. (2014). Identifying and validating quality-evaluation indicators for specialized indexes in the subject areas of basic sciences, engineering, agriculture, humanities, medical sciences, and art. Iranian Journal of Information Processing and Management, 29(4). (in Persian) Retrieved October 2, 2022, from https://jipm.irandoc.ac.ir/article-1-2540-fa.pdf
Abrizah, A., Zainab, A. N., Kiran, K., & Raj, R. G. (2013). LIS journals scientific impact and subject categorization: a comparison between Web of Science and Scopus. Scientometrics, 94(2), 721-740. doi: https://doi.org/10.1007/s11192-012-0813-7
Agresti, A. (2007). An Introduction to Categorical Data Analysis (Second ed.): John Wiley & Sons, Inc. doi: https://doi.org/10.1002/0470114754
Bensman, S. J., Smolinsky, L. J., & Pudovkin, A. I. (2010). Mean citation rate per article in mathematics journals: Differences from the scientific model. Journal of the American Society for Information Science and Technology, 61(7), 1440-1463. doi: https://doi.org/10.1002/asi.21332
Brzezinski, M. (2015). Power laws in citation distributions: evidence from Scopus. Scientometrics, 103(1), 213-228. doi: https://doi.org/10.1007/s11192-014-1524-z
Campanario, J. M. (2011). Empirical study of journal impact factors obtained using the classical two-year citation window versus a five-year citation window. Scientometrics, 87(1), 189-204. doi: https://doi.org/10.1007/s11192-010-0334-1
Efremenkova, V. M., & Gonnova, S. M. (2016). A comparison of Scopus and WoS database subject classifiers in mathematical disciplines. Scientific and Technical Information Processing, 43(2), 115-122. doi: https://doi.org/10.3103/S0147688216020088
Falagas, M. E., Kouranos, V. D., Arencibia-Jorge, R., & Karageorgopoulos, D. E. (2008). Comparison of SCImago journal rank indicator with journal impact factor. The FASEB Journal, 22(8), 2623-2628. doi: https://doi.org/10.1096/fj.08-107938
García, J. A., Rodriguez-Sánchez, R., & Fdez-Valdivia, J. (2011). Ranking of the subject areas of Scopus. Journal of the American Society for Information Science and Technology, 62(10), 2013-2023. doi: https://doi.org/10.1002/asi.21589
Garfield, E. (1972). Citation Analysis as a Tool in Journal Evaluation. Science, 178(4060), 471-479. doi: https://doi.org/10.1126/science.178.4060.471
González-Pereira, B., Guerrero-Bote, V. P., & Moya-Anegón, F. (2010). A new approach to the metric of journals’ scientific prestige: The SJR indicator. Journal of Informetrics, 4(3), 379-391. doi: https://doi.org/10.1016/j.joi.2010.03.002
Guz, A. N., & Rushchitsky, J. J. (2009). Scopus: A system for the evaluation of scientific journals. International Applied Mechanics, 45(4), 351. doi: https://doi.org/10.1007/s10778-009-0189-4
Mongeon, P., & Paul-Hus, A. (2016). The journal coverage of Web of Science and Scopus: a comparative analysis. Scientometrics, 106(1), 213-228. doi: https://doi.org/10.1007/s11192-015-1765-5
Richard, S., & Sun, Q. (2019). Analysis on MathSciNet database: some preliminary results. CoRR. doi: https://doi.org/10.48550/arXiv.1908.10282