Shortcut: WD:RBOT

Wikidata:Bot requests

If you have a bot request, add a new section using the button and describe exactly what you want. To reduce processing time, first discuss the legitimacy of your request with the community in the Project chat or on the relevant WikiProject's talk page. Please refer to previous discussions justifying the task in your request.

For botflag requests, see Wikidata:Requests for permissions.

Tools available to all users which can be used to accomplish the work without the need for a bot:

  1. PetScan for creating items from Wikimedia pages and/or adding the same statements to items
  2. QuickStatements for creating items and/or adding different statements to items
  3. Harvest Templates for importing statements from Wikimedia projects
  4. Descriptioner for adding descriptions to many items
  5. OpenRefine to import any type of data from tabular sources
On this page, old discussions are archived. An overview of all archives can be found at this page's archive index. The current archive is located at 2019/12.
SpBot archives all sections tagged with {{Section resolved|1=~~~~}} after 2 days.

You may find these related resources helpful:

Dataset Imports · Why import data into Wikidata · Learn how to import data · Bot requests · Ask a data import question


Redirects after archival[edit]

Request date: 11 September 2017, by: Jsamwrites (talkcontribslogs)

Link to discussions justifying the request
Task description

Retain the links to the original discussion section on the discussion pages even after archival, by allowing redirection.

Licence of data to import (if relevant)

Request process

Semi-automated import of information from Commons categories containing a "Category definition: Object" template[edit]

Request date: 5 February 2018, by: Rama (talkcontribslogs)

Link to discussions justifying the request
Task description

Commons categories about one specific object (such as a work of art, archaeological item, etc.) can be described with a "Category definition: Object" template [1]. This information is essentially a duplicate of what is or should be on Wikidata.

To prove this point, I have drafted a "User:Rama/Catdef" template that uses Lua to import all relevant information from Wikidata and reproduces all the features of "Category definition: Object", while requiring only the Q-Number as parameter (see Category:The_Seated_Scribe for instance). This template has the advantage of requesting Wikidata labels to render the information, and is thus much more multi-lingual than the hand-labeled version (try fr, de, ja, etc.).

I am now proposing to deploy another script to do the same thing the other way round: import data from the Commons templates into the relevant fields of Wikidata. Given the variety of ways a human can label or mislabel information in a template such as "Category definition: Object", I think the script should be a helper tool for importing data: it is to be run on one category at a time, with a human checking the result and correcting and completing the Wikidata entry as required. For now, I have been testing and refining my script on subcategories of [2] Category:Ship models in the Musée national de la Marine. You can see the result in the first 25 categories or so, and the corresponding Wikidata entries.

The tool is presently in the form of a Python script with a simple command-line interface:

  • ./ Category:Scale_model_of_Corse-MnM_29_MG_78 reads the information from Commons, parses it, renders the various fields in the console for debugging purposes, and creates the required Wikibase objects (e.g. text fields for inventory numbers, Q-items for artists and collections, WbQuantity for dimensions, WbTime for dates, etc.)
  • ./ Category:Scale_model_of_Corse-MnM_29_MG_78 --commit does all of the above, creates a new Q-item on Wikidata, and commits all the information to the relevant fields.

Ideally, when all the desired features are implemented and tested, this script might be useful as a tool where one could enter the
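As a rough sketch of the parsing step such a script performs: the Python below extracts |field=value pairs from a template body. The sample text and field names are illustrative only; real "Category definition: Object" templates nest further templates, which this naive splitter does not handle.

```python
import re

def parse_template_fields(wikitext: str) -> dict:
    """Extract |field=value pairs from a template body (naive sketch)."""
    fields = {}
    # Match lines like "| artist = Jean-Baptiste Tanneron"
    for m in re.finditer(r"^\s*\|\s*([^=|\n]+?)\s*=\s*(.*?)\s*$",
                         wikitext, re.MULTILINE):
        fields[m.group(1)] = m.group(2)
    return fields

sample = """{{Category definition: Object
| artist = Jean-Baptiste Tanneron
| inventory number = 29 MG 78
}}"""
print(parse_template_fields(sample)["inventory number"])  # 29 MG 78
```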

Licence of data to import (if relevant)

The information is already on Wikimedia Commons and is common public knowledge.


Request process

Crossref Journals[edit]

Request date: 27 March 2018, by: Mahdimoqri (talkcontribslogs)

Link to discussions justifying the request
Task description
  • Add missing journals from Crossref
Licence of data to import (if relevant)

Request process

elevation above sea level (P2044) values imported from ceb-Wiki[edit]

Request date: 6 September 2018, by: Ahoerstemeier (talkcontribslogs)

Link to discussions justifying the request
  • Many items have their elevation imported from the Cebuano Wikipedia. However, the way the bot created the values is very faulty; especially due to inaccurate coordinates, the value can differ by up to 500 m! Thus most of the values are utter nonsense; some are a rough approximation, but certainly not good data. To make things worse, the imported from Wikimedia project (P143) qualifier often wasn't added. For an extreme example see Knittelkar Spitze (Q1777201).
Task description

Firstly, a bot has to add all the missing imported from Wikimedia project (P143) references omitted in the original infobox harvesting. Secondly, especially for mountains and hills, the values have to be set to deprecated rank, to prevent them from poisoning our good data.
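The two passes could be planned with logic along these lines; a minimal Python sketch over simplified claim dicts (field and action names invented here), not the real Wikibase data model:

```python
def plan_elevation_fixes(claims):
    """Decide, per claim, whether to add the missing P143 reference
    and whether to deprecate the value (mountains and hills only).
    Each claim is a simplified dict, e.g.
    {"value": 2187, "imported_from": None, "subject_type": "mountain"}.
    """
    actions = []
    for claim in claims:
        if claim.get("imported_from") is None:
            # restore the omitted imported-from provenance
            actions.append(("add_P143_cebwiki", claim))
        if claim.get("subject_type") in ("mountain", "hill"):
            # ceb-wiki elevations are unreliable here: deprecate
            actions.append(("set_deprecated_rank", claim))
    return actions
```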

Licence of data to import (if relevant)

Request process


Most pages in (27209 pages) seem to lack items ( , currently 26641 pages).

I think it would be worth creating them, as well as an item for the person who is the subject of the article if it can't be matched with one of the existing items. --- Jura 07:43, 8 November 2018 (UTC)


To get this started I propose this structure for articles. It also mentions from which source each statement is imported. As I see it, besides the structure for articles, the structure for volumes and person subjects with imported data also needs to be decided. Additionally, described by source (P1343) should probably be added to new and existing person subjects. --Pyfisch (talk) 22:29, 11 December 2018 (UTC)


I've made a preliminary data export. It contains all BLKÖ articles with GND, Bearbeitungsstand, etc. The articles are linked based on the stated GND, Wikipedia and Wikisource articles; if there was a conflict, multiple Q-numbers are given. I also searched for items linked to the article and unfortunately found many that describe the person instead of the text (they will need to be split). The last four columns state the date/place of birth/death from the text. The dates vary in accuracy:
  • year-month-day, year-month, only year
  • ~ before a date marks imprecise dates
  • > before a date marks dates stated as "nach 1804" (after 1804)
  • A before a date marks "Anfang/erste Tage" (start of)
  • E before a date marks "Ende/letzte Tage" (end of)
  • M before a date marks "Mitte" (middle of)
  • ? means BLKÖ knows the person was dead but does not know when he/she died
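The prefix convention above can be decoded mechanically; a small Python sketch (the function and modifier names are my own):

```python
def parse_blkoe_date(raw):
    """Split a BLKÖ export date like '>1804' or '~1750-06' into
    (modifier, date). '?' means the death date is unknown."""
    prefixes = {"~": "circa", ">": "after",
                "A": "start", "E": "end", "M": "middle"}
    if raw == "?":
        return ("unknown", None)
    if raw and raw[0] in prefixes:
        return (prefixes[raw[0]], raw[1:])
    return (None, raw)  # plain year-month-day / year-month / year

print(parse_blkoe_date(">1804"))  # ('after', '1804')
```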

The places will need to be manually matched to Q-items. The first column contains some metadata about the kind of page. There are:

  • empty: Person
  • L: List (Liste)
  • F: Family, coat of arms (Wappen), genealogy (Genealogie)
  • R: Cross Reference
  • P: Prelude
  • H: note about names and alternate spellings
  • N: corrections, addenda (Nachträge)

Each group should get a distinct is-a property. @Jura1: Do you like it? This is just for viewing, a later version will be editable to make manual changes before the import. --Pyfisch (talk) 22:14, 18 December 2018 (UTC)

  • I like the approach. BTW, there is Help:Dates that attempts to summarize how to add incomplete dates. --- Jura 14:05, 20 December 2018 (UTC)
    • editable data export. Updated the exported data. The sheet "articles" is already cleaned up. But I need help to match the ~4000 place names in the sheet "places" to Wikidata Q-Items. --Pyfisch (talk) 16:07, 22 December 2018 (UTC)
  • @Pyfisch: thanks a lot for your proposal! Are there any plans to realize this? --M2k~dewiki (talk) 07:16, 10 July 2019 (UTC)
@M2k~dewiki: Yes, the data is already prepared for the import, but I have not gotten around to writing an import script, getting approval and running the script. --Pyfisch (talk) 09:07, 11 July 2019 (UTC)
    • You could do the upload with QuickStatements --- Jura 12:24, 19 July 2019 (UTC)

Cleanup VIAF dates[edit]

Task description

There are a series of imports of dates that need to be fixed, please see Topic:Un0f1g1eylmopgqu and the discussions linked there, notably Wikidata:Project_chat/Archive/2018/10#Bad_birthdays with details on how VIAF formats them. --- Jura 05:28, 14 November 2018 (UTC)

Licence of data to import (if relevant)
  • Is anyone interested in working on this problem? I think it's a real issue, but it needs attention from someone who can parse the VIAF records and that's certainly not me. - PKM (talk) 21:33, 16 March 2019 (UTC)
  • Yeah, it would be good. --- Jura 12:25, 19 July 2019 (UTC)

import writers[edit]

When adding values for screenwriter (P58), I notice that frequently these persons don't have Wikidata items yet.

It would be helpful to identify a few sources for these and create corresponding items. Ideally every tv episode would have its writers included. --- Jura 15:05, 18 November 2018 (UTC)

It would be beneficial to state whether the writer wrote just the teleplay or the story.--CENNOXX (talk) 07:19, 12 April 2019 (UTC)
  • At this stage, the idea is to simply create items for writers, not adding them to works. --- Jura 12:26, 19 July 2019 (UTC)

Import Schizosaccharomyces pombe protein coding genes[edit]

Request date: 6 December 2018, by: Anlock (talkcontribslogs)

Link to discussions justifying the request
Task description

The PomBase database manually curates and maintains the coding inventory of the S. pombe genome. I would like to upload the protein coding genes of S. pombe as per this request

The dataset is located here:

Licence of data to import (if relevant)

Creative Commons Attribution 4.0 International license (CC-BY)


@Anlock: using your spreadsheet I could create these gene items via QS, together with minimal protein items from what is in the sheet. I cannot maintain these items, e.g. RefSeq/ENSEMBL gene links or later UniProt updates. I would put the Pombase name as gene alias, and put what you give as P742 as protein alias. --SCIdude (talk) 15:01, 13 October 2019 (UTC)

Request process

Add original title of scientific articles[edit]

There are some articles that have a title (P1476) value enclosed in square brackets. This means that the title was translated into English and the article's original title wasn't in English.


Generally, the following should be done:

  1. deprecate existing P1476 statement
  2. add the original title with title (P1476)
  3. add the label in the original language
  4. remove [] from the English label

--- Jura 11:03, 11 December 2018 (UTC)
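Step 4 amounts to stripping the brackets from the English label; a minimal Python sketch (the function name is illustrative, and the API calls for steps 1-3 are omitted):

```python
import re

def unbracket_label(label):
    """Return the label without its translation brackets, or None
    if the label is not of the bracketed-translation form."""
    m = re.fullmatch(r"\[(.+)\](\.?)", label.strip())
    return m.group(1) + m.group(2) if m else None

print(unbracket_label("[Effects of aspirin on platelets]."))
# Effects of aspirin on platelets.
```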

Research_Bot claims to do this under Maintenance Queries but I still see a lot of research papers with this issue. I might work on a script for this to try and figure out how to make a bot. Notme1560 (talk) 18:17, 21 March 2019 (UTC)
I have created a script for this task. source and permission request --Notme1560 (talk) 20:39, 23 March 2019 (UTC)
  • It seems there may be some 5 out of 220 needing this. --- Jura 17:22, 26 August 2019 (UTC)

Reviews in articles[edit]

When doing checks on titles, I found that some items with P31=scholarly article (Q13442814) include an ISBN in the title (P1476) value.

Sample: Q28768784.

Ideally, these would have a statement main subject (P921) pointing to the item about the work. --- Jura 19:10, 13 December 2018 (UTC)


@Jura1: I’ve been manually cleaning up a few of these. Some comments on the process from my perspective:

- PKM (talk) 01:33, 4 March 2019 (UTC)

    • Sure, it's possible to take this a step further. --- Jura 11:09, 10 March 2019 (UTC)
      • I'd use genre (P136) for book review (Q637866), not P31. --- Jura 11:57, 25 August 2019 (UTC)
        • I would be happy with either, but are there advantages or drawbacks about using one approach over the other?
        • Adding the version, edition, or translation (Q3331189) as the main subject of the book review sounds sensible. Doing a bot run to find items with an ISBN in the title and mark them as book reviews (either P31 or P136) should be pretty reliable. Richard Nevell (talk) 09:16, 27 August 2019 (UTC)
"$" in the title also finds some. --- Jura 11:55, 25 August 2019 (UTC)
  • Maybe the first step is to identify them as book reviews, then find the work that is being reviewed. --- Jura 11:57, 25 August 2019 (UTC)
  • I identified some items by adding book review (Q637866). Talk:Q637866 has some queries. Not all items include details of the book that was reviewed. --- Jura 17:25, 26 August 2019 (UTC)
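Candidates can be flagged with a heuristic along these lines; the ISBN regex is a rough sketch, and the "$" check follows the observation above:

```python
import re

# Heuristic: a P1476 title containing an ISBN (or a "$" price)
# is likely a book review, not an ordinary article title.
ISBN_RE = re.compile(r"ISBN[\s:]*([\d-]{9,17}[\dX])", re.IGNORECASE)

def looks_like_book_review(title):
    return bool(ISBN_RE.search(title)) or "$" in title
```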

Patronage/clientèle patronage (P3872), rank-preferred for latest year available[edit]

Request date: 1 January 2019, by: Bouzinac (talkcontribslogs)

Link to discussions justifying the request
Task description

Update any element with P3872: if there are one or more years, set the latest year to preferred rank (only when it has year precision, not month, etc.) and set any other years present to normal rank. For instance, see

This should be executed once per year (as there might be new data). Thanks a lot!
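The rank update could follow logic like this; a Python sketch over simplified statement dicts (field names invented here), not the real Wikibase data model:

```python
def plan_p3872_ranks(statements):
    """The latest year-precision value gets preferred rank,
    everything else gets normal rank."""
    years = [s["year"] for s in statements if s.get("precision") == "year"]
    latest = max(years) if years else None
    return ["preferred" if s.get("precision") == "year" and s["year"] == latest
            else "normal"
            for s in statements]
```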

Licence of data to import (if relevant)
Request process

Import Treccani IDs[edit]

Request date: 6 February 2019, by: Epìdosis (talkcontribslogs)

Task description

At the moment we have four identifiers referring to Treccani works: Dizionario biografico degli italiani ID (P1986), Treccani ID (P3365), Enciclopedia Italiana ID (P4223), and Dizionario di Storia Treccani ID (P6404). Each article of these works has, in the right column "ALTRI RISULTATI PER", links to the articles regarding the same topic in the other works (e.g. Ugolino della Gherardesca (Q706003), with Treccani ID (P3365) conte-ugolino, has links also to the Enciclopedia Italiana (Enciclopedia Italiana ID (P4223)) and the Dizionario di Storia (Dizionario di Storia Treccani ID (P6404))). These cases are extremely frequent: many items have Dizionario biografico degli italiani ID (P1986) but not Treccani ID (P3365)/Enciclopedia Italiana ID (P4223); others have Treccani ID (P3365) but not Enciclopedia Italiana ID (P4223); nearly no item has the recently created Dizionario di Storia Treccani ID (P6404).

My request is: check each value of these identifiers in order to obtain values for the other three identifiers through the column "ALTRI RISULTATI PER".


Magyarország közigazgatási helynévkönyve, 2018. január 1. (Hungarian)[edit]

Request date: 12 May 2019, by: Szajci (talkcontribslogs)

Link to discussions justifying the request
  • Hi! The publication Magyarország közigazgatási helynévkönyve, 2018. január 1. (the register of administrative place names of Hungary, as of 1 January 2018) is available at this link ([3]). Is it possible for a bot to enter its data into Wikidata? Please, could someone write something encouraging?
Task description
Licence of data to import (if relevant)

Request process

Bot or template to add in Findagrave entries for new people not in Wikidata and for cemeteries not in Wikidata[edit]

Request date: 22 May 2019, by: Richard Arthur Norton (1958- ) (talkcontribslogs)

Task description
  • I would like the ability to type in a Findagrave number and have a bot/template migrate the Findagrave data into a new Wikidata entry; it would also do a search on that Findagrave number to make sure we do not already have it in Wikidata. For instance I had to migrate by hand.
  • We need the same for cemeteries not already in Wikidata. We really should have ALL cemeteries in Wikidata from Findagrave using a Mix and Match scenario, which I have never used before. I would take responsibility for making sure they are not duplicates. If the cemetery is already in Wikidata, it adds in the Findagrave number; if not, it creates a new entry. --RAN (talk) 23:16, 22 May 2019 (UTC)

I have created an import from a CSV file before, to add certain data to Wikidata. In my case it was tennis players missing two properties with qualifiers that were time-consuming to enter through the web interface, but the same thing could be done for graves and cemeteries. It looks like we can get some of the data straight from the website, if we just have a list of cemeteries we want to create. Would that be an idea? Edoderoo (talk) 14:48, 8 June 2019 (UTC)

Request process

Optimize format of country items[edit]

Given that these items get larger and larger, it might be worth reviewing their structure periodically and optimizing their format, e.g. by moving references to separate items, checking for duplication, etc. --- Jura 13:33, 14 June 2019 (UTC)

Related: --813gan (talk) 17:37, 17 June 2019 (UTC)

  • If <stated in>: <bla> is sufficient to locate the information within <bla>, I don't think all elements from the item <bla> should be repeated in the reference. --- Jura 14:47, 25 June 2019 (UTC)

Uploading Data to Retrosheet IDs: Q64615865[edit]

Request date: 14 June 2019, by: Kbschroeder84 (talkcontribslogs)

Link to discussions justifying the request

Task description

These are Retrosheet IDs for baseball players. Retrosheet is the largest repository of baseball data.

I received a copy/Google Sheet with the data, and I'm currently matching QIDs to the baseball players in the sheet.
Next steps will be to create a property and import the data to this property.
Licence of data to import (if relevant) N/A

Request process

Accepted by (Edoderoo (talk) 19:58, 16 June 2019 (UTC)) and under process
Task completed (10:49, 25 July 2019 (UTC))

  • Please let me know if this format is correct (enough). Theoretically, the value can be transformed into a URL to the Retrosheet site, but maybe the format of the item needs to be reformatted then. Please contact me if I can be of any help here.

Removing invalid Billboard artist ID (P4208) statements[edit]

Request date: 17 June 2019, by: Tinker Bell (talkcontribslogs)

Link to discussions justifying the request
Task description

Remove all Billboard artist ID (P4208) statements matching the regex [0-9]{6}\/.*
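Reading the character class as the digit range [0-9], the filter can be sketched as:

```python
import re

# Six digits, a slash, then anything: the malformed ID shape to delete.
INVALID_ID = re.compile(r"^[0-9]{6}/.*$")

def is_invalid(billboard_id):
    return bool(INVALID_ID.match(billboard_id))

print(is_invalid("123456/chart-history"))  # True
```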


I'm checking right now how many of these claims are left. If that looks good, then deleting them is just a line of code. See my script here. Edoderoo (talk) 12:55, 19 June 2019 (UTC)

Request process

Accepted by (Edoderoo (talk) 15:29, 19 June 2019 (UTC)) and under process
There was one item protected against vandalism that blocked my script. It has now finished completely.
Task completed (12:31, 23 June 2019 (UTC))

Edoderoo, thanks, but there are still many cases matching the regex in Wikidata:Database_reports/Constraint_violations/P4208#"Format"_violations. The last update was on June 30. --Tinker Bell 02:39, 6 July 2019 (UTC)
The request was for six digits (see the regex example), but the ones left have seven digits. I see they don't work either, so my script runs again, now for 7 digits. Edoderoo (talk) 07:44, 6 July 2019 (UTC)
Let's now wait for the constraint-report to update. There must be progress. Edoderoo (talk) 11:56, 6 July 2019 (UTC)
Now there are only a few issues left, which are best handled manually. Edoderoo (talk) 07:29, 19 July 2019 (UTC)
Edoderoo, thanks! --Tinker Bell 20:59, 18 August 2019 (UTC)

References to unreferenced citations[edit]

Request date: 18 June 2019, by: JJBullet (talkcontribslogs)

Link to discussions justifying the request


Task description

Add references to un-referenced citations

Licence of data to import (if relevant)

Request process


Request date: 20 June 2019, by: JJBullet (talkcontribslogs)

Link to discussions justifying the request
Task description

Add references to un-referenced citations, change categories when incorrect, add site links to enwiki, and lastly create items if needed.

Licence of data to import (if relevant)

Request process

Add description to items about articles[edit]

SELECT ?item
WHERE {
	?item wdt:P31 wd:Q13442814 . 
    OPTIONAL { ?item schema:description ?d . FILTER(lang(?d)="en") }
    FILTER( !BOUND(?d) )
}

Try it!

I seem to keep coming across articles that lack descriptions. If they had long titles, that wouldn't matter, but it happens with articles that could be mistaken for items about topics. As I can't query them efficiently and just add descriptions with QuickStatements/Descriptioner, maybe a bot could run the above query every few minutes or so (once query server lag is gone) and add basic descriptions. If the standard description collides with another item, please add some variation. --- Jura 14:43, 24 June 2019 (UTC)
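The collision rule in the last sentence could work along these lines (a sketch; the numbering scheme is my own, not an established convention):

```python
def pick_description(base, taken):
    """Return base, or a numbered variation if base collides with
    a description already used under the same label."""
    if base not in taken:
        return base
    n = 2
    while f"{base} ({n})" in taken:
        n += 1
    return f"{base} ({n})"

print(pick_description("scholarly article", {"scholarly article"}))
# scholarly article (2)
```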

In English, most get a description during the import. But for people working on the other 300 language wikis this ain't no help ;-) For Dutch I have given a big load of items a description already. A tool that can also be of help is Descriptioner. If you copy your query in there, it can set the descriptions for you in the background. Edoderoo (talk) 09:13, 25 June 2019 (UTC)
I'm aware of that. I just did a few with SELECT ?item { ?item wdt:P31 wd:Q13442814 } OFFSET n LIMIT 50000
Surprisingly, I even got up to offset 3,000,000. Still, even with this approach, a bot might be the better choice.
The other query needs even smaller steps to avoid timeouts.
Maybe there is a better way to identify them.--- Jura 11:02, 25 June 2019 (UTC)
I almost got to offset 4000000 before facing a timeout in descriptioner as well. --- Jura 11:50, 25 June 2019 (UTC)
and now the initial query times-out too. --- Jura 13:46, 25 June 2019 (UTC)


set preferred rank[edit]

When looking at the data for the previous request, it occurred to me that maybe the same as above should be done for Human Development Index (P1081) (the most recent value should have preferred rank, all others normal rank). However, I don't use it myself and it's a different type of data. @IvanP: who seems to have worked with it. --- Jura 15:35, 29 June 2019 (UTC)

@Jura1: I just wanted to note that HDI estimates for certain years have changed, e.g., the HDI of Germany in 1995 was given as 0.830 at the time I added the value to Wikidata, now it is 0.834. Bodhisattwa added current estimates but the outdated ones should be deleted. (I am not familiar with OpenRefine yet and actually did the HDI stuff manually back then. 😲) -- IvanP (talk) 16:33, 29 June 2019 (UTC)
SELECT (URI(CONCAT("",strafter(str(?item), "y/"),"#P1081")) as ?click) 
        ?year ?v ?url ?rank 
WHERE {
    BIND(wd:Q1025 as ?item) 
    ?item p:P1081 ?st . 
    ?st ps:P1081 ?v .
    OPTIONAL { ?st prov:wasDerivedFrom/pr:P854 ?url }
    OPTIONAL { ?st pq:P585 ?year }
    OPTIONAL { ?st prov:wasDerivedFrom/pr:P248 ?statedin }
    ?st wikibase:rank ?rank 
}
ORDER BY ?year

Try it!

An additional problem, then: for some years we have multiple values, and from the statements it's hard to say which one is which (see query above). The question is whether they should be deleted, get deprecated rank, or get some "criterion used" qualifier value (e.g. provisional).
Good thing 2017 has just one value ;). So we can set that preferred while sorting out the rest. --- Jura 23:28, 29 June 2019 (UTC)

Hi! The old value should be deprecated with "reason for deprecation" = item/value with less accuracy (Q42727519) (check Help:Deprecation). I'm operating WDBot for the property nominal GDP (P2131). I use "retrieved" to note when the data was retrieved. If after some time there is a revision of an old value, then it is easy to check which one is the most recent, and the old one can be deprecated. Cheers! Datawiki30 (talk) 19:10, 1 July 2019 (UTC)

fix multiple values per year[edit]

still todo --- Jura 23:24, 19 July 2019 (UTC)

"This work/study/research was supported"[edit]

Request date: 8 July 2019, by: Steak (talkcontribslogs) More than 200 journal article items have in the title a phrase like "This work was supported...". In all cases I checked, this was erroneously included as part of the title.

Task description

Can a bot recrawl the correct journal titles or at least remove this silly phrase? Steak (talk) 20:28, 8 July 2019 (UTC)

Licence of data to import (if relevant)

Request process

Monthly number of subscribers[edit]

At Wikidata:Property proposal/subscribers, there is some discussion about various formats for the number of subscribers. For accounts with many subscribers, I think it would be interesting to gather monthly data in Wikidata.

Using format (D1) this could be added to items such as Q65665844, Q65676176. Initially one might want to focus on accounts with > 100 or 50 million subscribers. Depending on how it goes, we could change the threshold.

I think ideally the monthly data would be gathered in the last week or last ten days of the month. --- Jura 14:22, 19 July 2019 (UTC)

Adding the NosDéputé identifier (P7040)[edit]

Request date: 20 July 2019, by: Tyseria (talkcontribslogs)

Link to discussions justifying the request
  • No discussion
Task description

Hi, is it possible for a bot to add the NosDéputé identifier (P7040) to pages linked to member of the French National Assembly (Q3044918) and 15th legislature of the Fifth French Republic (Q24939798)/14th legislature of the Fifth French Republic (Q3570385)/13th legislature of the Fifth French Republic (Q3025921)?
Examples with different names:

Sorry it's my first request and I do not know how to do it :) Thanks!

Licence of data to import (if relevant)

Request process

Accepted by (Premeditated (talk) 19:12, 24 July 2019 (UTC)) and under process
Task completed (21:35, 25 July 2019 (UTC)) by Premeditated

I have added all of the people in office from 2017 to today. The deputies from the periods 2007-2012 and 2012-2017 are on sub-pages, but there is a redirect on the site, so that is not a problem. Working on 2012-2017 now. - Premeditated (talk) 19:12, 24 July 2019 (UTC)

Thanks a lot Premeditated. Is it also possible to do it for NosSé identifier (P7060)? -- Tyseria 20:51, 24 July 2019 (UTC)
No problem. I have now added all of the deputies. The script added 1,300 statements, so around 500 sites need to be added manually. The script gets a little confused when two or more politicians with the same name exist (like w:fr:Maurice Vincent (homme politique, 1886-1961) and w:fr:Maurice Vincent (homme politique, 1955)), and there is also some naming confusion. Working on adding NosSé identifier (P7060); it consists of around 800 entries.
I think we need to take into account the deputies who have been re-elected, which probably reduces the number of pages to list manually. And how do you know on which pages the property has to be added manually? Thanks, -- Tyseria 17:46, 25 July 2019 (UTC)
@Tyseria: I have good news. The list had a lot of duplicate values, so there are not that many missing after all. The script added around 700 statements with NosSé identifier (P7060), so around 100 entries are missing. I will include the lists below. - Premeditated (talk) 21:34, 25 July 2019 (UTC)
Manual list

Add has part (P527) and part of (P361) to Romanian monuments[edit]

We seem to have plenty of items for monuments in Romania, where one is for the entire group (e.g. Q18545143) and several are for its parts (e.g. the ones at Q18545143#P527). I think it would be helpful if these were linked more systematically. --- Jura 12:05, 21 July 2019 (UTC)

Protein aliases[edit]

Request date: 27 July 2019, by: SCIdude (talkcontribslogs)

Link to discussions justifying the request
Task description

The set of protein items contained "(en)protein" aliases. This was fixed, but as you can see from Q24248974 there is still the Arabic بروتين ("protein"), and who knows which other language aliases are the exact translation of "protein". Optionally, the same item has a "hypothetical protein" alias which should be moved to the description (and do the same for all proteins with that alias), and as a third option remove the "Listeria gene" alias, which is completely nonsensical (proteins with an alias of the form taxon+"gene" would be the general target).

Licence of data to import (if relevant)

Request process

Commons township categories[edit]

Request date: 27 July 2019, by: Magog the Ogre (talkcontribslogs)

Link to discussions justifying the request
Task description
  • Occasionally, the linkage will fail because the pages are linked elsewhere. I expect that, and will provide a report to myself so I can either manually fix them or let them be.
Licence of data to import (if relevant)
  • N/A
@Magog the Ogre: so why don't you do it yourself? Why are you asking someone else to do it for you? Shouldn't this be on Wikidata:Requests for permissions/Bot? Multichill (talk) 19:30, 27 July 2019 (UTC)
I thought this was the section to request a bot. Magog the Ogre (talk) 20:41, 27 July 2019 (UTC)
Request process

Start days of calendar months[edit]

For each instance of calendar month of a given year (Q47018478) (Gregorian), can someone please apply the correct value of instance of (P31) from this list:

This will be analogous to common year starting on Saturday (Q235673) et seq. and will be used by Template:Wikidata Infobox. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 20:56, 30 July 2019 (UTC)

Started one batch which infers this information from the first day of the month. But this doesn't cover everything. --Matěj Suchánek (talk) 10:03, 31 July 2019 (UTC)
@Matěj Suchánek: Thank you - though I think that link does not go where you intended. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 18:30, 31 July 2019 (UTC)
Sorry, now it should. --Matěj Suchánek (talk) 19:46, 31 July 2019 (UTC)

Complete/fix items created by SourceMD[edit]

Many of the items created by the tool have labels in which a middle initial lacks its ".". The "." is present in some of the sources listed. Also, it might be possible to complete these items with aliases and other identifiers from some of the references provided. --- Jura 10:13, 7 August 2019 (UTC)

import coordinates from hiwiki[edit]

SELECT (COUNT(*) as ?count) 
WHERE {
	?item wdt:P31 wd:Q56436498 .
    [] schema:about ?item  ; schema:isPartOf <> 
    MINUS { ?item wdt:P625 [] }
}
Try it!

Aliva Sahoo
Ananth subray
Arjun Nemani
Godric ki Kothri
Gopala Krishna A
Gurbakhshish chand
J ansari
Jnanaranjan sahu
Kartik Chauhan
Manavpreet Kaur
Rajeeb Dutta
Mohamed Mahir
Naga sai sravanth
Pavan santhosh.s
Obaid Raza
Pranayraj Vangari
Saisumanth Javvaji
Satdeep Gill
Satpal Dandiwal
Stalinjeet Brar
आर्या जोशी
सुशान्त देवळेकर
सुबोध कुलकर्णी
हिंदुस्थान वासी
Kartik chauhan
Khalid khan
Owen Stephens
R Ashwani Banjan Murmu
Ramjit Tudu
Lokesha kunchadka
Ilaveyil riswan
Abijith k.a
Raju Jangid
Nikhil VJ, Pune
Sumit Surai
ZI Jony
Sriram vishnudatta
Meenakshi nandhini
Sai K shanmugam
Path slopu
Bhuvana Meenakshi
Soorya Hebbar (talk) 11:20, 1 December 2019 (UTC)
Notified participants of WikiProject India

Items like Q12431322, which came up on Wikidata:WikiProject Random, have a sitelink to hiwiki. That article includes an infobox with coordinates (hi:Template:Infobox Indian Jurisdictions). Maybe these could be imported systematically: some 20,000 items have a sitelink to hiwiki. Overall, there are more than 120,000 such items without coordinates. --- Jura 11:33, 7 August 2019 (UTC)

I made my bot import coordinates from the infobox, but at some point there were a lot of duplicates. Not sure if we can proceed; those coordinates will likely have to be deleted. --Matěj Suchánek (talk) 12:46, 7 August 2019 (UTC)
@Matěj Suchánek: thanks for looking into this.
That's bad. Shall I revert them or will you do it?
I pinged WP India above, maybe a contributor active at hiwiki can fix them there too. --- Jura 12:49, 7 August 2019 (UTC)
I'm removing them. --Matěj Suchánek (talk) 13:01, 7 August 2019 (UTC)
@Matěj Suchánek: Once phab:T200968 is deployed and it is possible to add geodata from doi:10.7927/H4CN71ZJ, one can derive better coordinates for such villages based on an Indian census area code (2001) (P3213) concordance (this concordance I will take the time to add manually). In the interim, however, it is probably fine to remove any possibly duplicate coordinates from previous imports, and @Jura1: to cease importing coordinates from hiwiki. Mahir256 (talk) 16:10, 7 August 2019 (UTC)
SELECT ?coor (COUNT(*) AS ?count) WHERE {
  ?item wdt:P31 wd:Q56436498 .
  [] schema:about ?item ; schema:isPartOf <https://hi.wikipedia.org/> .
  ?item wdt:P625 ?coor .
}
GROUP BY ?coor
HAVING (?count > 1)

Try it!

The above might help identify coordinates users imported from hiwiki previously. --- Jura 22:05, 7 August 2019 (UTC)

50000 items with description "village in India"@en[edit]

SELECT ?item ?itemLabel ?itemDescription WHERE {
  ?item wdt:P31 wd:Q56436498 ; schema:description "village in India"@en .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" }
}

Try it!

Similar to Q12431322 that came up on Wikidata:WikiProject Random, descriptions for these some 50,000 items could probably be improved.

For Q12431322, I changed this to "village in Patna district, Bihar, India" based on P131 on the item. --- Jura 11:49, 7 August 2019 (UTC)

A better description would list the tehsil/block that the village is located in, and for many such links on hiwiki this is given in the name of the page (for the example you cite, "दानापुर (पटना)" refers to Q12433539). This would of course require that the P131s of those villages be adjusted accordingly. Mahir256 (talk) 16:19, 7 August 2019 (UTC)
  • To start, I think one could also just add the state. This can be refined later. --- Jura 21:48, 7 August 2019 (UTC)
    • Here is a short query that shows what the new descriptions could be, along with the appropriate QuickStatements to execute the change. If it looks okay, I can make this change :) TheFireBender (talk) 06:05, 20 August 2019 (UTC)
      • Thanks. Looks good. --- Jura 08:26, 22 August 2019 (UTC)
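Building on the suggestion to start with the district/state from P131, the QuickStatements batch could be generated with a small script. This is only a sketch: the input tuples are a hypothetical shape that would come from a SPARQL query over the items' P131 chains.

```python
# Sketch: turn (QID, district label, state label) tuples into QuickStatements
# v1 rows that set a more specific English description ("Den" = description-en).
def build_description_rows(items):
    """items: iterable of (qid, district_label, state_label) tuples."""
    rows = []
    for qid, district, state in items:
        description = f"village in {district}, {state}, India"
        rows.append(f'{qid}\tDen\t"{description}"')
    return rows
```

For the example item discussed above, this would emit a row replacing "village in India" with "village in Patna district, Bihar, India".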

database reports on c:Category:Paintings without Wikidata item[edit]

To help determine for which files it could be worth creating new items, it would be helpful to have two database reports detailing the number of files

  • per artist
  • per collection

This could be done based on other categories on these files or based on the templates present on these files.

Ideally the report would be updated regularly. --- Jura 11:59, 7 August 2019 (UTC)

Thanks! Related: it would be nice to know which files on Commons under museum/artist categories have artwork templates and no Wikidata items. In my experience these files are often not picked up because they lack categories (part of an institution upload, or a file transferred to Commons from a specific artist page on Wikipedia). Of course there is no way to sort paintings from other artworks, but generally if the thing is in a museum, it is notable enough for Wikidata anyway. Jane023 (talk) 12:47, 7 August 2019 (UTC)

weekly import of new articles[edit]

To avoid Wikidata getting stale, it would be interesting to import new papers on a weekly basis, maybe with a one-week delay. This would apply to repositories where it can be done.

@Daniel Mietchen: --- Jura 12:16, 7 August 2019 (UTC)

I'd certainly like to see this tested, e.g. for these two use cases:
  2. all of PubMed Central, i.e. articles having a PMCID (P932), which point to a full text available from PMC.
--Daniel Mietchen (talk) 03:08, 23 August 2019 (UTC)
The disadvantage of skipping some might be that one wouldn't know if it's complete or not. --- Jura 17:00, 25 August 2019 (UTC)

Change rank in newest value of P1082[edit]

Request date: 12 August 2019, by: Amadalvarez (talkcontribslogs)

Link to discussions justifying the request
  • First, before asking for a bot, I'd like to know if this already exists as a regular cleaning task.

If not, you can see User_talk:Underlying_lk#Adding_població_(P1082)

Task description

The present or newest value of population (P1082) and some other properties should be the only one ranked as preferred. Some uploads made with QuickStatements don't set it and, additionally, don't change the "old preferred" value to normal rank, because QS doesn't allow this. Obviously, it also happens when someone forgets to do it in manual data entry, but the massive uploads have a wide impact.

The task to do is:

  • set preferred rank to the newest value (depending on P585) and set to normal rank any other that had preferred rank.

Thanks, Amadalvarez (talk) 06:38, 12 August 2019 (UTC)
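The rank-selection logic itself could be sketched as follows. The statement dicts are a hypothetical shape; a real bot would read and write the ranks through pywikibot or the wbsetclaim API, and edge cases raised elsewhere (ties on the same date, a deprecated most-recent value) would need extra handling.

```python
# Sketch of the P1082 rank fix: the most recently dated (P585) statement
# becomes "preferred", other dated ones "normal"; undated or deprecated
# statements are left untouched.
def reassign_ranks(statements):
    """statements: list of dicts with 'time' (ISO string from the P585
    qualifier, or None) and 'rank'. Returns the new rank for each statement."""
    dated = [i for i, s in enumerate(statements)
             if s.get("time") and s["rank"] != "deprecated"]
    if not dated:
        return [s["rank"] for s in statements]
    newest = max(dated, key=lambda i: statements[i]["time"])
    return ["preferred" if i == newest
            else ("normal" if i in dated else s["rank"])
            for i, s in enumerate(statements)]
```

Given an old preferred value from 2010 and a normal-ranked value from 2019, this would demote the former and promote the latter.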

Licence of data to import (if relevant)

Request process

replace "village in Russia" description[edit]

Thanks to WikiProject Random, I came across Q4438518 with the description "village in Russia". It seems that there are many more items with the same description [4] that could use an improvement. --- Jura 13:44, 20 August 2019 (UTC)

What improvement do you assume? To add some more narrow division in English description? --Infovarius (talk) 21:32, 21 August 2019 (UTC)
At Q4438518, I changed it to "village in Stupinsky District, Moscow Oblast, Russia", but already "village in Moscow Oblast, Russia" would have been better. --- Jura 08:27, 22 August 2019 (UTC)

Import taxon author (P405) and year of taxon name publication (P574) qualifiers for taxon name (P225)[edit]

Similar to this edit adding qualifiers to taxon name (P225), maybe there is a good way to import year of taxon name publication (P574)-values or even taxon author (P405) from enwiki (or another WP). --- Jura 09:10, 23 August 2019 (UTC)

Add numeric id to Twitter username (P2002)[edit]

Please see the discussion at Property_talk:P2002#Renamed_accounts?. As Twitter username (P2002) is unstable, please add Twitter user numeric ID (P6552) as qualifier to all P2002-statements.

If, at the same time, you could gather the number of subscribers and list them for accounts with >500,000 that would be most helpful. --- Jura 12:45, 24 August 2019 (UTC)

cleanup of English descriptions for people[edit]

Special:Search/"born:" "died:" haswbstatement:P31=Q5 finds some 10,000 items, many of which could use a cleanup of the description. --- Jura 14:33, 29 August 2019 (UTC)

Remove statements of P2035[edit]

Request date: 30 August 2019, by: Nomen ad hoc (talkcontribslogs)

Link to discussions justifying the request
Task description

Deprecated property, moved to LinkedIn personal profile ID (P6634). Needs to be removed, then deleted. (Oddly it wasn't done after Pigsonthewing's PfD request was closed...)

Licence of data to import (if relevant)

Request process

Import from Polish Wikipedia[edit]

Request date: 30 August 2019, by: Recherchedienst (talkcontribslogs)

Link to discussions justifying the request
  • Undisputed import from Wikipedia project.
Task description
Licence of data to import (if relevant)
  • You could try Petscan [6] --- Jura 23:00, 30 August 2019 (UTC)
    • ✓ OK Done. Powerful tool, but the references are missing.[7] --Recherchedienst (talk) 00:47, 31 August 2019 (UTC)
Request process


Request date: 1 September 2019, by: J 1982 (talkcontribslogs)

Link to discussions justifying the request

No need, this is a standard.

Task description

Move all categories described in Swedish as "kategorisida" into "Wikimedia-kategori".


I also call for all categories described in Swedish as "Wikipedia-kategori" to be moved into "Wikimedia-kategori". It is also a standard, so we don't need a link to discussions justifying the request.

I added those as well. It is running in bits and pieces; there are 4.4 million Wikimedia categories, and the SPARQL query runs fine on PAWS, but it needs a restart once a day or so. On my Linux machine the query gives "Killed" after 10 minutes; usually that means there is a memory issue due to too many results in the query. An alternative of downloading the items to a CSV file should resolve this, but so far that hasn't produced a start-to-end run. But it's still running (update 16 Sep), and it's still updating items. Edoderoo (talk) 07:35, 6 September 2019 (UTC)
  • In your PAWS account main folder, you can add a file "" (not .ipynb!) with content put_throttle=1 to edit at 1/s (after re-starting your server).
  • That said, there are admins who do not like larger amounts of PAWS edits under your regular account. I received a block for doing so in the past, so you should consider running it under your bot account.
  • Assuming the pywikibot code linked above is what you are actually using, I suggest tweaking the query to make it more efficient: with
    SELECT DISTINCT ?item WHERE { VALUES ?description { 'kategorisida'@sv 'wikipedia-kategori'@sv 'wikipedia kategori'@sv 'kategori'@sv } . ?item wdt:P31 wd:Q4167836; schema:description ?description . }
    Try it! , you only load the ~166k affected items to be edited. At 1/s, you'd be finishing the task within 2 days.
MisterSynergy (talk) 17:06, 16 September 2019 (UTC)
Maybe I'll try to tweak the query and run it with my bot account on Ubuntu, else it will also finish in like 20 days. There is no hurry for this query to finish, but it's nice to know that you can speed it up in the way you explain. Edoderoo (talk) 21:15, 16 September 2019 (UTC)
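The per-item edit itself is simple; here is a minimal sketch of the replacement step (a sketch only: the affected items would come from the query above, and the writes would go through pywikibot rather than this pure function):

```python
# Sketch: normalise the affected Swedish category descriptions to the
# standard "Wikimedia-kategori". Already-correct descriptions are untouched.
AFFECTED = {"kategorisida", "wikipedia-kategori", "wikipedia kategori", "kategori"}

def new_swedish_description(current):
    if current and current.strip().lower() in AFFECTED:
        return "Wikimedia-kategori"
    return current
```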
Request process
  • Accepted by (Edoderoo (talk) 09:54, 3 September 2019 (UTC)) and under process
  • Has been running: python script in PAWS
  • Task completed (20:58, 20 September 2019 (UTC))

It finished today. A few items with issues are left; there might be some merge candidates in between. Edoderoo (talk) 20:58, 20 September 2019 (UTC)

Import Babelnet IDs[edit]

Request date: 10 September 2019, by: Florentyna (talkcontribslogs)

Link to discussions justifying the request
Task description

Is there a way to import BabelNet IDs (P2581) in general by bot to Wikidata (in particular for everything related to the sport badminton, P641=Q7291)? Example: under sources in [8] this item is connected to Wikidata, but not vice versa. As a limitation, it may help to know that not everything from Wikidata can be found on BabelNet, but with an existing lemma in ENWIKI or DEWIKI a BabelNet ID should exist.


I don't see a way to find the BabelNet pages "automatically", other than doing a manual search. On the en-wiki and de-wiki articles I do not see BabelNet links. Did I miss them? Edoderoo (talk) 18:48, 17 September 2019 (UTC)

Thank you very much for your answer. There are no links to BabelNet from enwiki or dewiki. The only thing I found out is that an item mostly exists on BabelNet when an article exists in a large wiki like dewiki or enwiki. Example: 1998 Swedish Badminton Championships does not exist on BabelNet (no Wikipedia article available, but a Wikidata item has existed as Q27020330 since 2016). In comparison, 2015 Swedish Badminton Championships exists on BabelNet (17897575n), because a Wikipedia article exists (Q19057318). The Wikidata item is directly mentioned on BabelNet with its ID (which is probably the only connection between the two IDs). Now the reverse connection from WD to BabelNet is wanted. Florentyna (talk) 19:08, 17 September 2019 (UTC)
I made some more investigations. It is in general possible to download the BabelNet indices from there after login (size: 29 GB; the third-party CC-BY-SA resources in Lucene format are 17 GB). And there are different APIs. Hope that helps. @Edoderoo: Florentyna (talk) 07:58, 20 September 2019 (UTC)

Replace p:P3602/pq:P4100 with p:P3602/pq:P102[edit]

SELECT ?item ?itemLabel ?election ?electionLabel ?representing ?representingLabel WHERE {
  ?item p:P3602 [ ps:P3602 ?election; pq:P4100 ?representing ] .
  ?election wdt:P17 wd:Q16 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language 'en' }
}

Try it!

Following the discussion at Wikidata:Requests for permissions/Bot/EbeBot, please replace in the above the qualifier P4100 with P102. --- Jura 04:29, 15 September 2019 (UTC)

Just to be sure, you mean like this? Edoderoo (talk) 18:40, 17 September 2019 (UTC)
@Edoderoo: thanks for looking into this. Statements would be just those with candidacy in election (P3602) for Canadians, not position held (P39). --- Jura 09:06, 18 September 2019 (UTC)
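The per-statement change could be sketched as follows, on a hypothetical qualifier dict; a real bot would use pywikibot's removeQualifier/addQualifier on the claim rather than editing a plain dict.

```python
# Sketch: move all values of one qualifier property to another on a single
# claim's qualifier mapping (property id -> list of values), as requested
# above for P4100 -> P102 on P3602 claims.
def move_qualifier(qualifiers, source="P4100", target="P102"):
    values = qualifiers.pop(source, [])
    if values:
        qualifiers.setdefault(target, []).extend(values)
    return qualifiers
```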

Replace typo דיפולמט with correct word דיפלומט[edit]

Request date: 15 September 2019, by: Uziel302 (talkcontribslogs)

Replace the typo דיפולמט with the correct word דיפלומט.

8000 occurrences.

Can you give a few more details? In which language is this occurrence? And is it a description? Or a label, or an alias? Edoderoo (talk) 05:42, 16 September 2019 (UTC)
Edoderoo, look at my edit log, I managed to do it with QuickStatements. Uziel302 (talk) 10:49, 16 September 2019 (UTC)
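For similar future batches, the QuickStatements rows could be generated with a one-liner like the following sketch (the (QID, description) input pairs are a hypothetical shape from a search or query; "Dhe" is the QS v1 command for a Hebrew description):

```python
# Sketch: for every item whose Hebrew description contains the typo,
# emit a QuickStatements row with the corrected description.
def fix_hebrew_typo(rows):
    out = []
    for qid, description in rows:
        fixed = description.replace("דיפולמט", "דיפלומט")
        if fixed != description:
            out.append(f'{qid}\tDhe\t"{fixed}"')
    return out
```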
Request process

Move student (P802) to significant person (P3342)[edit]

Request date: 28 September 2019, by: Edoderoo (talkcontribslogs)

Link to discussions justifying the request
Task description

Move student (P802) to significant person (P3342) including all sources and qualifiers. Also note that the moved data will need a qualifier object has role (P3831) = pupil (Q48942) or similar, because P3342 is a generic property and needs the person's role, which is implicit in P802.

Licence of data to import (if relevant)

Not relevant


I can do this request myself, but I want here to get additional attention to this request.

  • Seems terribly redundant. Why delete it if you use another property instead? Tend to Oppose --- Jura 15:52, 28 September 2019 (UTC)
Request process

Import solar eclipses data[edit]

Just came across w:Special:PrefixIndex/Module:Solar_eclipse/. The data there is displayed on items like w:Solar_eclipse_of_December_14,_2020.

Times and duration, maybe more, could be imported. Sample for time: [9] --- Jura 10:53, 29 September 2019 (UTC)

Film infobox import[edit]

There are some mappings on Wikidata:WikiProject_Movies/Tools#Wikipedia_infobox_mapping.

In the past, we had regular imports from Wikipedias for most of them with Harvesttemplates. As the tool tends to timeout on many of them now, imports are somewhat limited. The py-framework might be able to do the same imports. --- Jura 10:02, 1 October 2019 (UTC)

BTW, a workaround I found for Harvesttemplates is to adjust the constraints. --- Jura 13:02, 4 November 2019 (UTC)

Golf rankings[edit]

Request date: 11 October 2019, by: MSGJ (talkcontribslogs)

Link to discussions justifying the request
Task description

Please import data in en:Template:Infobox golfer/highest ranking to Wikidata. For example:

|Wayne Grady=25 (17 March 1991)<!--OWGRid=501--><ref name=w199111>{{cite web |url= |title=Week 11 1991 Ending 17 Mar 1991 |publisher=[[OWGR]] |accessdate=25 September 2019 |format=pdf}}</ref><!--dmy-->

is converted as follows:

Additionally there are some which have qualifier duration (P2047). I will give an example of one of these later.


Request process

✓ Done I wrote a script to create a QuickStatements query, you can see the query running here --SixTwoEight (talk) 12:11, 12 October 2019 (UTC)

@SixTwoEight: thank you very much indeed. Sorry I did not get round to giving an example of the duration. Here is one:
|Nick Price=[[List of world number one male golfers|1]] (14 August 1994)<!--OWGRid=452--><ref name=w199433>{{cite web |url= |title=Week 33 1994 Ending 14 Aug 1994 |publisher=[[OWGR]] |accessdate=20 December 2018 |format=pdf}}</ref><br>(44 weeks)<!--dmy-->

For some reason this duration is only specified for those who reached number 1 ranking. Is there a quick way to add these qualifiers? If not, don't worry, I will do it manually. And thanks again! MSGJ (talk) 16:04, 13 October 2019 (UTC)
✓ Done @MSGJ: Batch 1 and batch 2 SixTwoEight (talk) 19:25, 13 October 2019 (UTC)

Thanks again 628! MSGJ (talk) 07:10, 14 October 2019 (UTC)

Bot account for Recent Changes API[edit]

Request date: 17 October 2019, by: DD063520 (talkcontribslogs)

Link to discussions justifying the request
Task description

I would like to have a bot account to increase the rclimit of the Recent Changes API. It is limited to 500 and can go up to 5000. I'm interested in monitoring changes in Wikidata to import them into a Wikibase.

Licence of data to import (if relevant)

Requests for bot flags should be on Wikidata:Requests_for_permissions/Bot --SixTwoEight (talk) 22:38, 17 October 2019 (UTC).

Request process

Upload triples to Wikidata[edit]

Request date: 1 November 2019, by: JLuzc (talkcontribslogs)

Link to discussions justifying the request

I discussed uploading triples to Wikidata that were extracted from Wikipedia, specifically from HTML tables. The final suggestion received from @ChristianKl: is in Topic:V9tqy53bfliigaly

Task description

Uploading a list of 45K triples. The CSV file is available in Drive File, it contains [subject, predicate, object, wikipedia_source_page, precision_score]

Licence of data to import (if relevant)

Request process

Fix ranks for multiple "population"-statements (P1082)[edit]

Some time ago, a user set the ranks on all population (P1082) statements for a series of cities from "preferred" to normal. See discussion at Wikidata:Project_chat/Archive/2019/08#Preferred_status_for_population.

Unfortunately, I didn't revert it back then. The result is that queries using population numbers with the "wdt:P1082" triple tend to get out of hand.

For places with 3 or more population values, please set the one with the most recent date to preferred rank.

@DerFussi, Jklamo, Serhio_Magpie, Vojtěch_Dostál: who participated in the discussion. --- Jura 13:10, 4 November 2019 (UTC)

Thank you Jura for bringing this up. Note: Please don't change ranks in cases when the most recent statement is deprecated (it can be that the most up-to-date statement is not suitable for some reason). Also, please don't change anything in case there are two most recent population numbers with the same date qualifier (it can be that their determination method (P459) differs and we're unable to automatically recognize the more correct one). Also, why the limitation to places with 3 or more values? Personally I'd do it for places with 2 values as well. Vojtěch Dostál (talk) 13:26, 4 November 2019 (UTC)
If you can do it safely for 2 as well, please do. The main problem now is that we can get dozens of values .. --- Jura 13:30, 4 November 2019 (UTC)
Really good news. Thanks. Got some complaints about huge infoboxes with a long list of population values. -- DerFussi 16:06, 4 November 2019 (UTC)

Fix local dialing code (P473) wrongly inserted[edit]

Request date: 7 November 2019, by: Andyrom75 (talkcontribslogs)

Task description

Several entities have a wrong value for the local dialing code (P473) according to the format as a regular expression (P1793) specified on it, [\d\- ]+, which, as clarified, excludes characters such as ,/;()+

Typical examples of wrong values, easily identified, are the following two:

  1. local dialing code (P473) that includes the country calling code (P474) at the beginning
  2. local dialing code (P473) that includes the "optional" zero at the beginning
  • Case 1 can be detected by looking for "+"; when present, the prefix should be compared with the relevant country calling code (P474) and, if it matches, removed
  • Case 2 can be detected by looking for "(" and ")" with zeros inside; if matched, it should be removed
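The two checks can be sketched in Python. This is only a sketch: the per-item country calling code (e.g. "+39") is an assumption that would have to be fetched from the item's country via P474.

```python
import re

# Sketch of normalising P473 values so they match the P1793 format [\d\- ]+.
def clean_dialing_code(value, country_code=None):
    value = value.strip()
    # Case 1: strip a leading country calling code such as "+39".
    if country_code and value.startswith(country_code):
        value = value[len(country_code):].lstrip(" -")
    # Case 2: strip an "optional" zero in parentheses, e.g. "(0)55" -> "55".
    value = re.sub(r"\(\s*0+\s*\)\s*", "", value)
    return value

# A value is valid when it matches the P1793 format constraint.
VALID = re.compile(r"^[\d\- ]+$")
```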

Request process

Automated addition of WikiJournal metadata to Wikidata[edit]

Request date: 9 November 2019, by: Evolution and evolvability (talkcontribslogs)

Currently, a lot of info for each WikiJournal article is stored in v:template:article_info (essentially in infoboxes). It'd be ideal to be able to easily synchronise this over to Wikidata (list of submitted articles; list of published articles). We used to import metadata for published articles from Crossref to Wikidata via sourcemd, but that is not working currently, and Crossref also lacks a lot of useful metadata. Would it be possible to synchronise this so that it's imported into Wikidata, then transcluded back over to the WikiJournal page? This should also help to automate the tracking table that currently has to be updated manually. It'd similarly be useful to add editors from this page to Wikidata (either to the journal item or to the item for the person, as appropriate).

Link to discussions justifying the request
Task description
  1. When |submitted= in v:template:article_info is set on a WikiJournal article (category)
  2. When v:template:review added on article talkpage (example)
  3. If possible, when v:template:response is added inside v:template:review
    • add some value to item for review to indicate.
  4. When v:template:WikiJournal_editor_summary added to page transcluded into v:WikiJournal_User_Group/Editors
Licence of data to import (if relevant)



Is this an ordinary bot request? As far as I understand, bot requests are for users that already have a working bot and want it to have permissions that, e.g., allow for large-scale editing. That said, it is good to have WikiJournal information in Wikidata. At one point I believe I had all articles constructed in Wikidata. Note that the old version of sourcemd is still working. Do not underestimate the amount of manual annotation that is necessary, e.g., for author item construction and/or disambiguation, topic annotation and addition of non-Crossref citation information. Is it correct that you want a separate item for each peer review? It is unusual to have Wikidata items for talk pages, so perhaps this issue should be brought up on the Wikidata discussion forum to ensure consensus on notability. — Finn Årup Nielsen (fnielsen) (talk) 16:15, 20 November 2019 (UTC)

@Fnielsen: Aha, apologies, I had thought this was a location to request assistance in building a bot (similar to Wikidata:Request_a_query). Do you have an idea of where might be more logical? Manual refinement may be necessary, though it can hopefully be reduced by requiring either an ORCID or a QID to be provided for each author/editor/reviewer. It is correct that I think an item for each peer review would be ideal (equivalent to Q58285151); the backup option would be to include the info in the article's item using reviewed by (P4032) with a lot of qualifiers. For many of these (e.g. copying of metadata from article to Wikidata) the ultimate long-term solution would be to be able to edit Wikidata directly from the Wikiversity/WikiJournal editing interface, but that seems a long way off. T.Shafee(evo&evo) (talk) 22:59, 20 November 2019 (UTC)
Request process

Update P1240 and P1250 annually[edit]

Request date: 17 November 2019, by: Tomastvivlaren (talkcontribslogs)

Link to discussions justifying the request
Task description
Licence of data to import (if relevant)

Please include a time qualifier (point in time (P585)). The most recent import to P1240 did not include time info, and because of that, I designed the infoboxes to only show the last value in the list. So please put the new value at the end of the list. In the future, perhaps the infobox should show the full time series graphically. In case some value is deleted from the list, please tell me. There are some vague plans for a common Nordic list that will hopefully replace the Danish and Norwegian lists one day. Tomastvivlaren (talk) 19:22, 17 November 2019 (UTC)

Request process

✓ Done by user:Larske. Tomastvivlaren (talk) 18:54, 22 November 2019 (UTC)

Latin descriptions[edit]

Request date: 22 November 2019, by: 1234qwer1234qwer4 (talkcontribslogs)

Link to discussions justifying the request
No need for that; "media" is an o-declension neuter plural and has a genitive ending in -ōrum.
Task description

Replace Latin description “categoria Vicimediae” by “categoria Vicimediorum”


Request process