The vision behind the BIP! Services is the creation of an ecosystem of added-value tools and resources, built upon advanced, explainable, and well-documented indicators that reflect a variety of aspects of the impact and merit of scientific works and researchers. These services are built upon the data of popular scientific knowledge bases (e.g., Crossref, OpenCitations) and aim to facilitate tasks like reading prioritization and responsible research assessment, adopting various suggestions made by relevant initiatives (e.g., DORA). This page summarizes all indicators available through the various BIP! Services, along with explanations of their main intuition, the way they are calculated, and their most important limitations, in order to educate BIP! Services users and help them avoid common pitfalls and misuses.

Article-level Indicators

  • Impact Indicators

    • Popularity

      • Intuition: This indicator reflects the "current" impact/attention (the "hype") of an article in the research community at large, based on the underlying citation network.
      • Data & calculation: Citation data and article metadata required to calculate this indicator are gathered from various sources described here. Calculation is based on the AttRank [1] method, which is designed to alleviate the bias that other indicators (like Citation Count or PageRank) have against recently published articles. It does so by incorporating an attention-based mechanism, inspired by the preferential attachment network growth model, where preferential attachment is applied to a recent snapshot of the citation graph. In this way, it models a researcher's preference to read papers that have received a lot of attention recently.
      • Parameters: alpha: 0.2, beta: 0.5, gamma: 0.3, rho: -0.16, recent attention based on the 3 most recent years (including the current one), convergence error: 10^-12
      • Limitations: BIP! software collects data from specific data sources (see more here), which means that part of the existing literature may not be considered. Also, since some indicators require the publication year to be present for their calculation, we consider only DOIs for which we can gather this minimum piece of information from at least one data source. In addition, we discard papers with publication years greater than one year after the current year of our dataset's production date (they are considered erroneous entries). For all time-based analyses, we set the value of the current year to the year following the year of the dataset's production. BIP! software treats objects with distinct DOIs as distinct entries; since multiple DOIs may refer to the same object (e.g., DOI aliases), it is possible that, in some cases, multiple entries refer to the same object.
      • Availability: BIP! Finder (ranking, article details); BIP! Scholar; BIP! API; BIP! DB
      • Code: https://github.com/athenarc/Bip-Ranker
      • References:
        [1] I. Kanellos, T. Vergoulis, D. Sacharidis, T. Dalamagas, Y. Vassiliou: Ranking Papers by their Short-Term Scientific Impact. CoRR abs/2006.00951 (2020)
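As a rough illustration of the method, the following is a minimal, unoptimized Python sketch of an AttRank-style iteration over a toy citation graph. It is not the BIP! implementation (see the Bip-Ranker repository for that): the handling of dangling papers, the normalization details, and the data structures are simplifying assumptions, while the parameter names (alpha, beta, gamma, rho) mirror the ones listed above.

```python
import math

def attrank(citations, years, alpha=0.2, beta=0.5, gamma=0.3,
            rho=-0.16, current_year=2024, recent_years=3, tol=1e-12):
    """Toy AttRank-style iteration (illustrative, not the BIP! code).

    citations: dict mapping each paper to the list of papers it cites
    years:     dict mapping each paper to its publication year
    """
    papers = list(years)
    n = len(papers)

    # Attention vector: each paper's share of the citations made during
    # the `recent_years` most recent years (including the current one).
    recent = [c for p, cited in citations.items()
              if years[p] > current_year - recent_years for c in cited]
    att = {p: (recent.count(p) / len(recent)) if recent else 1 / n
           for p in papers}

    # Recency vector: exponential preference for recently published papers.
    raw = {p: math.exp(rho * (current_year - years[p])) for p in papers}
    z = sum(raw.values())
    time_pref = {p: raw[p] / z for p in papers}

    score = {p: 1 / n for p in papers}
    while True:
        new = {p: beta * att[p] + gamma * time_pref[p] for p in papers}
        for p, cited in citations.items():
            if cited:
                share = alpha * score[p] / len(cited)
                for c in cited:
                    new[c] += share
            else:  # dangling paper: spread its mass uniformly (a simplification)
                for q in papers:
                    new[q] += alpha * score[p] / n
        if sum(abs(new[p] - score[p]) for p in papers) < tol:
            return new
        score = new
```

Each score combines citation mass (alpha), recent attention (beta), and an exponential recency preference (gamma, rho), which is why recently cited papers rise to the top even if their total citation counts are still small.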
    • Influence

      • Intuition: This indicator reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically).
      • Data & calculation: Citation data and article metadata required to calculate this indicator are gathered from various sources described here. Calculation is based on the PageRank [1] network analysis method. In the context of citation networks, PageRank estimates the importance of each article according to its centrality in the whole network. In contrast to other influence indicators (like Citation Counts), PageRank differentiates the importance of citations based on the articles that make them (i.e., importance is not determined by the mere number of citations), thus avoiding some of the shortcomings of plain citation counting.
      • Parameters: alpha: 0.5, convergence error: 10^-12
      • Limitations: BIP! software collects data from specific data sources (see more here), which means that part of the existing literature may not be considered. Also, since some indicators require the publication year to be present for their calculation, we consider only DOIs for which we can gather this minimum piece of information from at least one data source. In addition, we discard papers with publication years greater than one year after the current year of our dataset's production date (they are considered erroneous entries). For all time-based analyses, we set the value of the current year to the year following the year of the dataset's production. BIP! software treats objects with distinct DOIs as distinct entries; since multiple DOIs may refer to the same object (e.g., DOI aliases), it is possible that, in some cases, multiple entries refer to the same object.
      • Availability: BIP! Finder (ranking, article details); BIP! Scholar; BIP! API; BIP! DB
      • Code: https://github.com/athenarc/Bip-Ranker
      • References:
        [1] L. Page, S. Brin, R. Motwani, and T. Winograd: The PageRank Citation Ranking: Bringing Order to the Web. Technical Report, Stanford InfoLab, 1999
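To make the calculation concrete, here is a minimal power-iteration sketch of PageRank on a toy citation graph. It is an illustration only, not the BIP! code; dangling papers (those with no outgoing references) spread their mass uniformly here, which is one common convention but an assumption on our part.

```python
def pagerank(citations, alpha=0.5, tol=1e-12):
    """Toy power-iteration PageRank on a citation graph (illustrative).

    citations: dict mapping each paper to the list of papers it cites
    alpha:     damping factor (the Parameters entry above lists 0.5)
    """
    papers = set(citations) | {c for cited in citations.values() for c in cited}
    n = len(papers)
    score = {p: 1 / n for p in papers}
    while True:
        # Every paper gets a baseline (1 - alpha) / n, plus a share of the
        # score of each paper that cites it.
        new = {p: (1 - alpha) / n for p in papers}
        for p in papers:
            cited = citations.get(p, [])
            if cited:
                share = alpha * score[p] / len(cited)
                for c in cited:
                    new[c] += share
            else:  # dangling paper: spread its mass uniformly (a convention)
                for q in papers:
                    new[q] += alpha * score[p] / n
        if sum(abs(new[p] - score[p]) for p in papers) < tol:
            return new
        score = new
```

Because a citation's contribution is proportional to the citing paper's own score, a citation from a highly central paper counts for more than one from a peripheral paper.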
    • Impulse

      • Intuition: This indicator reflects the initial momentum of an article directly after its publication, based on the underlying citation network.
      • Data & calculation: Citation data and article metadata required to calculate this indicator are gathered from various sources described here. Calculation is based on a time-restricted version of the Citation Count, where the time window has a fixed length for all articles but starts at each article's publication date, i.e., only citations received within the first 3 years after each paper's publication are counted.
      • Parameters: years: 3
      • Limitations: BIP! software collects data from specific data sources (see more here), which means that part of the existing literature may not be considered. Also, since some indicators require the publication year to be present for their calculation, we consider only DOIs for which we can gather this minimum piece of information from at least one data source. In addition, we discard papers with publication years greater than one year after the current year of our dataset's production date (they are considered erroneous entries). For all time-based analyses, we set the value of the current year to the year following the year of the dataset's production. BIP! software treats objects with distinct DOIs as distinct entries; since multiple DOIs may refer to the same object (e.g., DOI aliases), it is possible that, in some cases, multiple entries refer to the same object.
      • Availability: BIP! Finder (ranking, article details); BIP! Scholar; BIP! API; BIP! DB
      • Code: https://github.com/athenarc/Bip-Ranker
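Since Impulse is just a time-restricted citation count, it can be illustrated in a few lines. The half-open boundary convention below ([pub_year, pub_year + window)) is an assumption made for illustration; the exact convention follows the Bip-Ranker code.

```python
def impulse(pub_year, citing_years, window=3):
    """Toy Impulse sketch: count citations received within `window`
    years of the cited paper's publication year (illustrative boundary)."""
    return sum(1 for y in citing_years if pub_year <= y < pub_year + window)
```

For example, a 2015 paper cited in 2015, 2016, 2018, and 2020 has an Impulse of 2 under this convention, since only the 2015 and 2016 citations fall inside the window.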
    • Popularity-alt

      • Intuition: This is an alternative to the "Popularity" indicator (see above), which also reflects the "current" impact/attention (the "hype") of an article in the research community at large, based on the underlying citation network.
      • Data & calculation: Citation data and article metadata required to calculate this indicator are gathered from various sources described here. Calculation is based on the RAM [1] method and is essentially a citation count in which recent citations are considered more important. This type of “time awareness” alleviates problems of indicators like Citation Count and PageRank, which are biased against recently published articles (new articles need time to receive a “sufficient” number of citations). Hence, RAM is more suitable for capturing the current “hype” of an article.
      • Parameters: gamma: 0.6
      • Limitations: BIP! software collects data from specific data sources (see more here), which means that part of the existing literature may not be considered. Also, since some indicators require the publication year to be present for their calculation, we consider only DOIs for which we can gather this minimum piece of information from at least one data source. In addition, we discard papers with publication years greater than one year after the current year of our dataset's production date (they are considered erroneous entries). For all time-based analyses, we set the value of the current year to the year following the year of the dataset's production. BIP! software treats objects with distinct DOIs as distinct entries; since multiple DOIs may refer to the same object (e.g., DOI aliases), it is possible that, in some cases, multiple entries refer to the same object.
      • Availability: BIP! API; BIP! DB
      • Code: https://github.com/athenarc/Bip-Ranker
      • References:
        [1] Rumi Ghosh, Tsung-Ting Kuo, Chun-Nan Hsu, Shou-De Lin, and Kristina Lerman. 2011. Time-Aware Ranking in Dynamic Citation Networks. In Data Mining Workshops (ICDMW). 373–380
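The RAM score can be sketched as an exponentially time-weighted citation count: a citation made in year t contributes gamma^(current_year − t), so this year's citations count almost fully while older ones decay geometrically. This is an illustrative sketch under that reading of the method, not the BIP! implementation.

```python
def ram(citing_years, current_year, gamma=0.6):
    """Toy RAM sketch: exponentially time-weighted citation count.
    A citation made in year t contributes gamma ** (current_year - t)."""
    return sum(gamma ** (current_year - t) for t in citing_years)
```

With gamma = 0.6 (the parameter listed above), a citation from the current year contributes 1.0 and one from the previous year contributes 0.6, so two such citations yield a score of 1.6.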
    • Influence-alt

      • Intuition: This is an alternative to the "Influence" indicator (see above), which also reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically).
      • Data & calculation: Citation data and article metadata required to calculate this indicator are gathered from various sources described here. Calculation is based on the Citation Count indicator, i.e., it is calculated as the total number of citations received by the article.
      • Limitations: BIP! software collects data from specific data sources (see more here), which means that part of the existing literature may not be considered. Also, since some indicators require the publication year to be present for their calculation, we consider only DOIs for which we can gather this minimum piece of information from at least one data source. In addition, we discard papers with publication years greater than one year after the current year of our dataset's production date (they are considered erroneous entries). For all time-based analyses, we set the value of the current year to the year following the year of the dataset's production. BIP! software treats objects with distinct DOIs as distinct entries; since multiple DOIs may refer to the same object (e.g., DOI aliases), it is possible that, in some cases, multiple entries refer to the same object.
      • Availability: BIP! API; BIP! DB
      • Code: https://github.com/athenarc/Bip-Ranker
  • Usage Indicators

    • Views in BIP!

      • Intuition: The total number of unique article views (i.e., visits to the respective article details page) in BIP!.
      • Data & calculation: The data are taken directly from our database. We count how many distinct users have visited the respective article details page. For non-logged-in users, we assume that different IPs correspond to different users.
      • Limitations: It is possible that we double count visits (e.g., a user visiting the respective page both while logged in and anonymously, or an unregistered user visiting the page multiple times from different IPs). This is an important drawback, so this indicator should be used with extreme caution. Also, the values of this indicator are not meaningful unless the platform is widely used in the particular discipline of interest (otherwise the values may not be representative). BIP! software treats objects with distinct DOIs as distinct entries; since multiple DOIs may refer to the same object (e.g., DOI aliases), it is possible that, in some cases, multiple entries refer to the same object.
      • Availability: BIP! Finder
    • Mendeley Readers

      • Intuition: The total number of users that have added the paper to their library in Mendeley.
      • Data & calculation: The data are gathered on-the-fly using the Mendeley API.
      • Limitations: BIP! software treats objects with distinct DOIs as distinct entries; since multiple DOIs may refer to the same object (e.g., DOI aliases), it is possible that, in some cases, multiple entries refer to the same object.
      • Availability: BIP! Finder
      • References:
        [1] More details can be found here.

Researcher-level Indicators

  • Productivity Indicators

    • Number of Publications

      • Intuition: The total number of a researcher's articles, reflecting their productivity.
      • Data & calculation: Citation data and article metadata required to calculate this indicator are gathered from various sources described here. Calculation is done by counting the number of articles of the researcher of interest.
      • Limitations: The papers of each researcher are gathered based on the public entries of their ORCID profile, hence the record may be incomplete. BIP! software collects data from specific data sources (see more here), which means that part of the existing literature may not be considered. Also, since some indicators require the publication year to be present for their calculation, we consider only DOIs for which we can gather this minimum piece of information from at least one data source. In addition, we discard papers with publication years greater than one year after the current year of our dataset's production date (they are considered erroneous entries). For all time-based analyses, we set the value of the current year to the year following the year of the dataset's production. BIP! software treats objects with distinct DOIs as distinct entries; since multiple DOIs may refer to the same object (e.g., DOI aliases), it is possible that, in some cases, multiple entries refer to the same object.
      • Availability: BIP! Scholar
      • Code: https://github.com/athenarc/bip-scholar-indicators
    • Number of Datasets

      • Intuition: The total number of a researcher's datasets, reflecting their productivity.
      • Data & calculation: Citation data and article metadata required to calculate this indicator are gathered from various sources described here. Calculation is done by counting the number of datasets of the researcher of interest.
      • Limitations: The papers of each researcher are gathered based on the public entries of their ORCID profile, hence the record may be incomplete. BIP! software collects data from specific data sources (see more here), which means that part of the existing literature may not be considered. Also, since some indicators require the publication year to be present for their calculation, we consider only DOIs for which we can gather this minimum piece of information from at least one data source. In addition, we discard papers with publication years greater than one year after the current year of our dataset's production date (they are considered erroneous entries). For all time-based analyses, we set the value of the current year to the year following the year of the dataset's production. BIP! software treats objects with distinct DOIs as distinct entries; since multiple DOIs may refer to the same object (e.g., DOI aliases), it is possible that, in some cases, multiple entries refer to the same object.
      • Availability: BIP! Scholar
      • Code: https://github.com/athenarc/bip-scholar-indicators
  • Impact Indicators

    • Citations

      • Intuition: The total number of citations received by all articles of the researcher of interest.
      • Data & calculation: Citation data and article metadata required to calculate this indicator are gathered from various sources described here. Calculation is done by counting all citations attracted by the articles of the researcher of interest.
      • Limitations: The papers of each researcher are gathered based on the public entries of their ORCID profile, hence the record may be incomplete. BIP! software collects data from specific data sources (see more here), which means that part of the existing literature may not be considered. Also, since some indicators require the publication year to be present for their calculation, we consider only DOIs for which we can gather this minimum piece of information from at least one data source. In addition, we discard papers with publication years greater than one year after the current year of our dataset's production date (they are considered erroneous entries). For all time-based analyses, we set the value of the current year to the year following the year of the dataset's production. BIP! software treats objects with distinct DOIs as distinct entries; since multiple DOIs may refer to the same object (e.g., DOI aliases), it is possible that, in some cases, multiple entries refer to the same object.
      • Availability: BIP! Scholar
      • Code: https://github.com/athenarc/bip-scholar-indicators
    • H-index

      • Intuition: It is an estimation of the importance, significance, and broad impact of a researcher's cumulative research contributions.
      • Data & calculation: Citation data and article metadata required to calculate this indicator are gathered from various sources described here. The h-index is the maximum value of h such that the given researcher has published at least h papers that have each been cited at least h times. Details in [1].
      • Limitations: The papers of each researcher are gathered based on the public entries of their ORCID profile, hence the record may be incomplete. BIP! software collects data from specific data sources (see more here), which means that part of the existing literature may not be considered. Also, since some indicators require the publication year to be present for their calculation, we consider only DOIs for which we can gather this minimum piece of information from at least one data source. In addition, we discard papers with publication years greater than one year after the current year of our dataset's production date (they are considered erroneous entries). For all time-based analyses, we set the value of the current year to the year following the year of the dataset's production. BIP! software treats objects with distinct DOIs as distinct entries; since multiple DOIs may refer to the same object (e.g., DOI aliases), it is possible that, in some cases, multiple entries refer to the same object.
      • Availability: BIP! Scholar
      • Code: https://github.com/athenarc/bip-scholar-indicators
      • References:
        [1] Hirsch JE: An index to quantify an individual's scientific research output. Proc Natl Acad Sci U S A. 2005 November 15; 102(46): 16569–16572. doi: 10.1073/pnas.0507655102
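The definition above translates directly into code: sort the citation counts in decreasing order and find the largest rank h whose count is still at least h. The following is a minimal illustration of that definition, not the BIP! implementation.

```python
def h_index(citation_counts):
    """Largest h such that at least h papers have at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citation_counts, reverse=True), start=1):
        if cites >= rank:  # the paper at this rank still has enough citations
            h = rank
        else:
            break
    return h
```

For example, citation counts [10, 8, 5, 4, 3] give an h-index of 4: the four most-cited papers each have at least 4 citations, but the fifth has only 3.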
    • i10-index

      • Intuition: This is a simple measure introduced by Google Scholar that helps gauge the productivity of a researcher.
      • Data & calculation: Citation data and article metadata required to calculate this indicator are gathered from various sources described here. The i10-index is the number of publications with at least 10 citations.
      • Limitations: The papers of each researcher are gathered based on the public entries of their ORCID profile, hence the record may be incomplete. BIP! software collects data from specific data sources (see more here), which means that part of the existing literature may not be considered. Also, since some indicators require the publication year to be present for their calculation, we consider only DOIs for which we can gather this minimum piece of information from at least one data source. In addition, we discard papers with publication years greater than one year after the current year of our dataset's production date (they are considered erroneous entries). For all time-based analyses, we set the value of the current year to the year following the year of the dataset's production. BIP! software treats objects with distinct DOIs as distinct entries; since multiple DOIs may refer to the same object (e.g., DOI aliases), it is possible that, in some cases, multiple entries refer to the same object.
      • Availability: BIP! Scholar
      • Code: https://github.com/athenarc/bip-scholar-indicators
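The i10-index is a simple threshold count, illustrated below; the `threshold` parameter is only there to make the definition explicit (the indicator fixes it at 10).

```python
def i10_index(citation_counts, threshold=10):
    """Number of publications with at least `threshold` (= 10) citations."""
    return sum(1 for c in citation_counts if c >= threshold)
```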
    • Aggregated Popularity

      • Intuition: The sum of the popularity (current impact) scores of all articles of a researcher of interest.
      • Data & calculation: Citation data and article metadata required to calculate this indicator are gathered from various sources described here. Calculation is done by aggregating the Popularity scores (see Section 'Article-level Indicators') of the articles of the researcher of interest.
      • Limitations: The papers of each researcher are gathered based on the public entries of their ORCID profile, hence the record may be incomplete. BIP! software collects data from specific data sources (see more here), which means that part of the existing literature may not be considered. Also, since some indicators require the publication year to be present for their calculation, we consider only DOIs for which we can gather this minimum piece of information from at least one data source. In addition, we discard papers with publication years greater than one year after the current year of our dataset's production date (they are considered erroneous entries). For all time-based analyses, we set the value of the current year to the year following the year of the dataset's production. BIP! software treats objects with distinct DOIs as distinct entries; since multiple DOIs may refer to the same object (e.g., DOI aliases), it is possible that, in some cases, multiple entries refer to the same object.
      • Availability: BIP! Scholar
      • Code: https://github.com/athenarc/bip-scholar-indicators
    • Aggregated Influence

      • Intuition: The sum of the influence (total/overall impact) scores of all articles of a researcher of interest.
      • Data & calculation: Citation data and article metadata required to calculate this indicator are gathered from various sources described here. Calculation is done by aggregating the Influence scores (see Section 'Article-level Indicators') of the articles of the researcher of interest.
      • Limitations: The papers of each researcher are gathered based on the public entries of their ORCID profile, hence the record may be incomplete. BIP! software collects data from specific data sources (see more here), which means that part of the existing literature may not be considered. Also, since some indicators require the publication year to be present for their calculation, we consider only DOIs for which we can gather this minimum piece of information from at least one data source. In addition, we discard papers with publication years greater than one year after the current year of our dataset's production date (they are considered erroneous entries). For all time-based analyses, we set the value of the current year to the year following the year of the dataset's production. BIP! software treats objects with distinct DOIs as distinct entries; since multiple DOIs may refer to the same object (e.g., DOI aliases), it is possible that, in some cases, multiple entries refer to the same object.
      • Availability: BIP! Scholar
      • Code: https://github.com/athenarc/bip-scholar-indicators
    • Aggregated Impulse

      • Intuition: The sum of the impulse scores of all articles of a researcher of interest.
      • Data & calculation: Citation data and article metadata required to calculate this indicator are gathered from various sources described here. Calculation is done by aggregating the Impulse scores (see Section 'Article-level Indicators') of the articles of the researcher of interest.
      • Limitations: The papers of each researcher are gathered based on the public entries of their ORCID profile, hence the record may be incomplete. BIP! software collects data from specific data sources (see more here), which means that part of the existing literature may not be considered. Also, since some indicators require the publication year to be present for their calculation, we consider only DOIs for which we can gather this minimum piece of information from at least one data source. In addition, we discard papers with publication years greater than one year after the current year of our dataset's production date (they are considered erroneous entries). For all time-based analyses, we set the value of the current year to the year following the year of the dataset's production. BIP! software treats objects with distinct DOIs as distinct entries; since multiple DOIs may refer to the same object (e.g., DOI aliases), it is possible that, in some cases, multiple entries refer to the same object.
      • Availability: BIP! Scholar
      • Code: https://github.com/athenarc/bip-scholar-indicators
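All three aggregated indicators (Aggregated Popularity, Influence, and Impulse) follow the same pattern: sum a per-article score over the researcher's articles. A generic sketch of this aggregation is shown below; treating articles without a computed score as contributing zero is an assumption made here for illustration.

```python
def aggregate_score(scores_by_article, researcher_articles):
    """Sum a per-article score (Popularity, Influence, or Impulse) over a
    researcher's articles. Articles missing from the score table count as
    zero here (an illustrative assumption)."""
    return sum(scores_by_article.get(a, 0.0) for a in researcher_articles)
```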
  • Open Science Indicators

    • Open Access Share

      • Intuition: The share (proportion) of the articles of the researcher of interest that are open access.
      • Data & calculation: Citation data and article metadata required to calculate this indicator are gathered from various sources described here. Calculation is done by dividing the number of open access articles by the total number of articles of the researcher of interest.
      • Limitations: The papers of each researcher are gathered based on the public entries of their ORCID profile, hence the record may be incomplete. There are some articles for which the license is unknown (these are not considered in the share). BIP! software collects data from specific data sources (see more here), which means that part of the existing literature may not be considered. Also, since some indicators require the publication year to be present for their calculation, we consider only DOIs for which we can gather this minimum piece of information from at least one data source. In addition, we discard papers with publication years greater than one year after the current year of our dataset's production date (they are considered erroneous entries). For all time-based analyses, we set the value of the current year to the year following the year of the dataset's production. BIP! software treats objects with distinct DOIs as distinct entries; since multiple DOIs may refer to the same object (e.g., DOI aliases), it is possible that, in some cases, multiple entries refer to the same object.
      • Availability: BIP! Scholar
      • Code: https://github.com/athenarc/bip-scholar-indicators
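The share can be sketched as follows; articles with an unknown license/access status are excluded from both the numerator and the denominator, as noted in the limitations. The status labels ("open", "closed", None) are illustrative, not the actual BIP! data model.

```python
def open_access_share(access_status):
    """Share of a researcher's articles that are open access.

    access_status: dict mapping article -> "open", "closed", or None
    (unknown). Unknown entries are excluded from the share entirely.
    """
    known = [s for s in access_status.values() if s is not None]
    if not known:
        return 0.0  # no article with a known status (illustrative choice)
    return sum(1 for s in known if s == "open") / len(known)
```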
  • Career Stage Indicators

    • Academic Age

      • Intuition: It reflects how long a scientist has been active in the research field.
      • Data & calculation: The academic age of a scientist is computed as the span of years from their first published work up until the present.
      • Limitations: The papers of each researcher are gathered based on the public entries of their ORCID profile, hence the record may be incomplete. BIP! software collects data from specific data sources (see more here), which means that part of the existing literature may not be considered. Also, since some indicators require the publication year to be present for their calculation, we consider only DOIs for which we can gather this minimum piece of information from at least one data source. In addition, we discard papers with publication years greater than one year after the current year of our dataset's production date (they are considered erroneous entries). For all time-based analyses, we set the value of the current year to the year following the year of the dataset's production. BIP! software treats objects with distinct DOIs as distinct entries; since multiple DOIs may refer to the same object (e.g., DOI aliases), it is possible that, in some cases, multiple entries refer to the same object.
      • Availability: BIP! Scholar
      • Code: https://github.com/athenarc/bip-scholar-indicators
    • Fair Academic Age

      • Intuition: A variant of the academic age indicator that takes into consideration a researcher's inactive periods.
      • Data & calculation: The time spans (in months) of the researcher's inactive periods are aggregated, and this total is subtracted from the overall academic age.
      • Limitations: The papers of each researcher are gathered based on the public entries of their ORCID profile, hence the record may be incomplete. Note that the inactive periods are self-declared by the researchers themselves. BIP! software collects data from specific data sources (see more here), which means that part of the existing literature may not be considered. Also, since some indicators require the publication year to be present for their calculation, we consider only DOIs for which we can gather this minimum piece of information from at least one data source. In addition, we discard papers with publication years greater than one year after the current year of our dataset's production date (they are considered erroneous entries). For all time-based analyses, we set the value of the current year to the year following the year of the dataset's production. BIP! software treats objects with distinct DOIs as distinct entries; since multiple DOIs may refer to the same object (e.g., DOI aliases), it is possible that, in some cases, multiple entries refer to the same object.
      • Availability: BIP! Scholar
      • Code: https://github.com/athenarc/bip-scholar-indicators
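Both career-stage indicators reduce to simple arithmetic on years and months. The sketch below illustrates them; the exact span convention (inclusive vs. exclusive of the first year) is an assumption made here for illustration.

```python
def academic_age(first_pub_year, current_year):
    """Years elapsed since the researcher's first published work."""
    return current_year - first_pub_year

def fair_academic_age(first_pub_year, current_year, inactive_months=0):
    """Fair variant: subtract the self-declared inactive periods,
    given as a total number of months, from the academic age."""
    return academic_age(first_pub_year, current_year) - inactive_months / 12
```

For example, a researcher whose first work appeared in 2010 has an academic age of 14 in 2024; with 24 self-declared inactive months, the fair academic age drops to 12.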