Wikipedia talk:Wikimedia Strategy 2017/Cycle 2/The Augmented Age

What impact would we have on the world if we follow this theme?

  • Although Wikipedia is a great way to find information, I feel that by following this route we would neglect those with a poor internet connection, as a machine-learning algorithm would require a strong connection on the client's side, which isn't possible for everyone, especially those living in very rural areas. Rowan Harley (talk) 13:31, 6 June 2017 (UTC)
  • WP is not a reliable source. Automation could help us drive out inadequately sourced material and/or provide better sourcing. Lfstevens (talk) 16:02, 13 May 2017 (UTC)
  • I feel this comment barely scratches the surface of addressing this question. Considering that someone, or some group of people, inputs the information into a computer program, the information supplied by the program is subjective, in that it represents only the views or interpretations of that person or persons and imposes those views as "the only true interpretation of any given subject." Whereas allowing people from all dimensions to access, add to, provide, and change the information makes the best sense, as "free will", or planning future decisions and actions, can only be exercised by analyzing all available information and then forming an opinion on how best to apply it. Tshane007 (talk • contribs) 19:29, 14 May 2017 (UTC)
  • It is important to be aware that algorithms are not inherently less biased than human editors. Natural language processing, identification of source data and any resulting machine-generated narratives reflect the biases of the programmers who develop such tools. There are numerous studies from a range of sources that have already substantiated this tendency, e.g. Biased Algorithms and Algorithms Aren’t Biased, But the People Who Write Them May Be ("Mathematical models often use proxies to stand in for things the modelers wish to measure but can’t"). Algorithm-generated content is not a panacea, as it will reflect the views of the individuals who developed it. Programmatic approaches tend to magnify visibility, so the developer's voice and viewpoint will be amplified as well.--FeralOink (talk) 08:42, 22 May 2017 (UTC)
Transhumanist's mention of more intensive usage of scripts and automated edits should be expanded upon. I would like us to focus on this, instead of blue-sky conjectures about technological progress that has yet to manifest, even though it might by 2030.--FeralOink (talk) 08:48, 22 May 2017 (UTC)

Comments by The Transhumanist

The sub-themes say it all: innovation, automation, adaptation, and expansion to affect quality and accessibility. And since AI is what we are referring to here, the impact will be game-changing. If you automate an encyclopedia with AI, by 2030 the thing may basically be able to revise and update itself in real time, with or without the help of human editors. Initially, as it is being developed, it would probably be applied to building classification systems, which would eventually be adapted to building full-fledged articles. Article-building might start with using data from one article to update another, and expand to mining the Web to update the entire encyclopedia.

Full automation will not be achieved by 2030 if we progress linearly. But we're not progressing linearly: we are on a course of accelerating change. Innovations will continue to increase in frequency, and at some point we will see major breakthroughs made on almost a daily basis, each one with the potential to revolutionize the way we do things. Examples of major breakthroughs in the past are pen and paper, the printing press, radio, TV, computers, smartphones, etc. Future breakthroughs are anybody's guess, but by 2030 they might include computers one thousand to one million times more powerful than the computers we have today, intelligent Q&A systems, automated authoring programs, fully integrated personal assistants, ubiquitous heads-up displays, etc. As an example of a revolution: if automated authoring programs exist, by 2030 one might be so sophisticated that it could create an extensive article from scratch on any subject of your choosing in a fraction of a second. With that kind of technology available, the question is, what kind of system would Wikipedia be? Content-based? Service-based? Both? Would editors be obsolete? Would programmers?

We will probably see rapid improvement of translation technologies, so that we will be able to translate all the Wikipedias automatically. If you translated all the Wikipedias into English, you would have 200+ English encyclopedias, each with some unique information. The challenge then would be to harvest, consolidate, and integrate it all into a single presentation, so that the English reader has easy access to all of that knowledge. The same would apply to translating all the Wikipedias into all the other languages as well. That would be a huge breakthrough. And it's coming.

Natural language processing includes natural language generation and natural language understanding. The first implies composition, the second, interaction. If Wikipedia can understand itself, then it could talk to you about itself. That's coming too.

If we don't keep up with the state of the art, then some other organization will likely leapfrog Wikipedia. We can't safely assume that Wikipedia will automatically stay in the lead in the field of knowledge management. We need to fully embrace AI.

Sincerely, The Transhumanist 06:30, 13 May 2017 (UTC)

With the hurricane of AI-development activity going on right now, 15 years is way too far off in the future to be planning for. A 15-year plan? Are you kidding? The entire field has transformed almost completely in the past 5 years, and is poised to do so again in even less time than that.

We need to figure out now how to transform Wikipedia over the next year or two. Not fifteen. It is time for a major paradigm shift.

“The business plans of the next 10,000 startups are easy to forecast: Take X and add AI.” – Kevin Kelly

As AI advances, "user" will take on new connotations, more like "driver" or "operator" or "director". You will basically tell the AI what you want it to do, and it will do it.

If you are reading a website (like Wikipedia), you may ask "show me all the info you have on John D. Rockefeller", and rather than give you a list of search results, it would compile all that information into a page or set of pages for you to read, with a table of contents and an index, and who knows what else. Then you could say "read it to me", and it would. It might even present a slide show to go along with it.

If you are an editor of Wikipedia (or its successor) in the not-so-distant future, your role might not be to write, but simply to dictate. Wikipedia would transcribe what you say into a sandbox, from where it could be dragged and dropped into an article. Need a research assistant? That functionality would be built in, eventually providing automatic gathering of reference materials and auto-organization of them into supporting bibliographies: anything and everything you would need to direct it further. Imagine reference libraries as extensive as this on every major subject, maintained by bots and humans working together.

The trend is more and better interaction between programs (websites), acting as intelligent entities, and humans. Websites will get more input from their users, and that input will 1) affect the development of those websites, and 2) perhaps actually become content on those websites.

(This would require our rules to evolve.)

Major paradigm shifts like this are coming. The encyclopedia might (sooner rather than later) gather information directly from primary sources. Imagine being interviewed by the encyclopedia itself. Such an interview might look like this (from our article on wikis):

Interview with Ward Cunningham, inventor of the wiki

If we don't supply these or other cool innovative functionalities as basic features, someone else will. And Wikipedia will be rendered obsolete. The traffic will go to other websites.

What will this take? Programmers and dreamers. And all the AI resources we can access.

What should we do in the meantime?

Those who can, gather AI resources and start developing with them.

The rest can bridge the gaps between the main services we provide now (printed knowledge) and the mainstream services that are coming. For example, make use of the cameras on everybody's laptops to do video interviews of the notable people we currently cover only in print.

And anything else you can dream up. Brainstorm, baby! The Transhumanist 11:12, 25 May 2017 (UTC)

Comments by FeralOink

Transhumanist said:

"...if you had automated authoring programs, by 2030 one might be so sophisticated that it could create an extensive article from scratch on any subject of your choosing in a fraction of a second. Given that kind of technology being available, the question is, what kind of system would Wikipedia be? Content-based? Service-based? Both? Would editors be obsolete? Would programmers?"

I don't think we want to consider this for now. Wikipedia needs more editors, not fewer. By 2030, we might not need editors, but I think it would be a bad idea to even suggest that editors won't be needed in the future. The same is true for programmers. Wikipedia needs all the programming help it can get.

Also, lots of people are uneasy about technological obsolescence and replacement of humans by robots or AI. There is a lot of fear and misinformation about it. We need to be careful not to further stoke those sentiments. Utilizing AI effectively is still a work in progress. Machine learning has more demonstrable uses, but I think we should avoid language that is futuristic and vague. Wikipedia doesn't need to be state of the art or a leader in knowledge management. We do need to remain reliable and free to all.

My comment is not intended as an attack on or denigration of Transhumanist. I agree with what he says about machine translation expanding access to Wikipedia to include a wider audience.--FeralOink (talk) 00:21, 14 May 2017 (UTC)

We might still be called "editors" in the future, but we'll be more like collaborators with the computers, rather than mere users of the non-adaptive boxes we have today. We'll be more productive than ever, through our computers. The programs might write the articles, but where will they get the information to write about? From people. That is, from us, via observation, discussion, interviews, etc. But that's a ways off, more than a year or two. From now until such computers are available, there will be a steady stream of ever-improving knowledge management tools, which we should make use of if for no other reason than to keep ahead of the would-be competition. IT automation will have the potential to leapfrog Wikipedia. So Wikipedia needs to be a frog too. The Transhumanist 12:22, 25 May 2017 (UTC)
I think saying that Wikipedia should be written by AIs isn't a particularly good way to sell the idea; it's a bit like "driverless cars". People are legitimately fearful of automation without a human at the controls. Far better to talk about "smart cars" rather than "driverless cars"; ditto, far better to talk about "smart tools" that assist Wikipedia editors to be at their most productive, accurate, etc. I've been around since punched cards and paper tapes, and AI has always been this elusive dream of "can't tell it from human" that's always just beyond our reach. Of course, as we figure out how to do bits of it, like recognise human speech, we stop calling it AI and start calling it speech recognition software, and AI remains that harder problem that stays elusively just beyond our reach :-) I'd like an AI that processed my watchlist for me and could detect the semantic intentions of new users and fix the article to do what they were trying to do, but do it right (I waste too much time on that every day). I think what really sells AI is when it takes over doing something we don't really enjoy having to do (robot vacuum cleaners). I like writing article content; I don't really want an AI writing it for me, but I'd welcome assistance with tasks like assembling the citation, fixing my spelling and grammar, etc. But there are loads of maintenance tasks, like updating the populations of places, updating the new mayors elected in towns, etc., and I'm happy for a machine to do those for me (so long as I can switch it off if it's getting things wrong!). Kerry (talk) 06:05, 30 May 2017 (UTC)

Comments by Fixuture

(@The Transhumanist:)
Relevant to this is: Wikipedia:Data mining Wikipedia. I don't think we should allow AI to edit Wikipedia directly. Instead, it could be used to create suggestions, which could then be approved, declined, or adapted (a minimal sketch of this workflow follows below). Or AI could expand knowledge externally (incl. sensemaking), which could then be imported. There are problems with relying too heavily on AI when editing. We can't just take an AI, feed it a bunch of news articles that we've verified are about the topic we want to add content on, apply automatic summarization, and replace Wikipedia editors with an AI that makes sense of the material using its prior Wikipedia knowledge and adds the content itself. We must retain some level of control and only augment ourselves with AI.
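A minimal sketch of that suggest-then-review workflow, in Python. The summarizer and the review step are hypothetical stand-ins, not real Wikipedia or WMF APIs:

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Suggestion:
    article: str             # target article title
    proposed_text: str       # machine-generated addition
    sources: List[str]       # URLs the text was summarized from
    status: str = "pending"  # pending -> approved / declined

def propose_addition(article: str, source_urls: List[str], source_texts: List[str],
                     summarize: Callable[[List[str]], str]) -> Suggestion:
    # Run the (hypothetical) summarizer, but only create a suggestion;
    # the AI never writes to the wiki directly.
    return Suggestion(article, summarize(source_texts), source_urls)

def human_review(suggestion: Suggestion, approve: bool) -> Suggestion:
    # A human editor stays in control: nothing is published until
    # a reviewer explicitly approves the suggestion.
    suggestion.status = "approved" if approve else "declined"
    return suggestion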
One way I think that AI and Wikipedia will come together is neurotechnology: brain implants which allow one to make effective use of Wikipedia knowledge in one's thinking and discussions. This also includes the whole category and infobox system. And Wikipedia might also be part of a larger thought-sharing ecosystem by which ideas, innovation, and progress in general are rapidly accelerated by networking, collaborating on, accumulating, and opening up ideas, philosophical subjects/concepts, genes, drugs, problems, policies/law, software, locations, events, objects, and issues (in which only the Wikipedia subpart requires "RS" and everything else only requires streamlined cognitive contribution).
And as I wrote here: (such) neurotechnology may strengthen collective intelligence and collaborative knowledge production and organization, and may allow, among other things, for better understanding, conceptual organization and contextualization of, and interaction with, objects, ideas, and processes. People would increasingly become wired as bidirectionally mediating nodes between the collective, software/algorithm- and AI-driven network and computation systems, and reality (with its contemporary peculiarities and socioeconomic structures), and would increasingly extend themselves into the digital (e.g. exomemory).
--Fixuture (talk) 21:40, 12 June 2017 (UTC)Reply

@Fixuture: What is AI? In essence, it is anything that a computer can do. By that definition, we already have AIs working on WP in the form of bots. But due to the AI effect, we don't call them AIs. "They're just bots". And so how do we draw the line between non-AI bots and AI bots? The AI effect will simply redefine "AI" not to include any new functionality added. Unless it reaches the uncanny valley. But what you are talking about is assimilation. Pretty soon we'll be chanting, "We are Borg!" :) Collective consciousness. Welcome to the hive mind. But they still have to overcome the problem of brain tissue inflammation caused by implanting foreign objects in the brain. I'm not ready to be plugged in, just yet.
As for AI edits, if AIs aren't allowed to edit WP, they'll simply download the whole thing and work on a fork, which would probably double in size overnight. And the next night, and so on. If the material is any good, the search engines will pick up on it, and so much for Wikipedia's ranking in the search results. For Wikipedia to compete, it will have to integrate AIs into its operations without hampering their speed of contribution. If humans bottleneck progress, WP will be leapfrogged. The Transhumanist 00:23, 13 June 2017 (UTC)
Although "Wikipedia in 2030" is what we are discussing here, it is interesting that we are not discussing "Should there be a Wikipedia in 2030?" or "what will overtake Wikipedia before 2030?" Because we aren't having those discussion, it tends to cause discussion here to normalise the status quo and think in terms of incremental changes to it. We risk being like the railway barons competing for bigger locomotives to increase speed, while the aeroplane was being invented. I note that as our "product" is licensed CC-BY-SA, anyone with a "new idea" can take it and reuse it (so long as it is suitably acknowledged). We have no defence against a competitor with a "new idea" except by having already had better ideas ourselves. As for the "hive mind", I think we may have already achieved it with Facebook, where sharing quickly spreads information (true or false) across the planet and it appears to be a major news source for many people. Kerry (talk) 02:25, 13 June 2017 (UTC)Reply
We should also consider quality and accuracy, and hence shouldn't just allow AIs to edit blindly. Also, AIs are being developed gradually; there won't be a superintelligent AI editing Wikipedia right after ClueBot-like bots, getting everything right and doubling the content overnight. Furthermore, forks would IMO be a good solution for such an AI: the expansions would then get "imported" by humans from that fork. Actually, it would be better if it wasn't a fork in that sense but simply an extension of the site.
Concerning the "hive mind" I think that's a general inherent feature and/or potential of the Internet. Facebook is just one (imo rather problematic) part of it and Wikipedia is another one. Facebook is actually doing some research into neurotechnology (similarly to what Musk is doing as issued in the post linked above) and I think that in the future culture and thought is engaged with in new ways which can be both problematic and great. Beyond the collective-aspect of it it also enhances the individual in the sense of cyborgs and posthumans. --Fixuture (talk) 20:06, 13 June 2017 (UTC)Reply

How important is this theme relative to the other 4 themes? Why?

  • It will enable all the other themes to be more effective. Lfstevens (talk) 16:02, 13 May 2017 (UTC)
  • It will be all-important, for it will drive development in the other 4 areas. Because it is technology. In this day and age, technology is the main driving factor, and AI especially so. Since a core research vector of AI is natural language processing, AI will be increasingly interactive, and so Wikipedia will be also, with both editors and readers alike. It will be global, because AI's reach will be global, as automated translation is a key emerging technology powered by AI. No more language barriers. If AI can help us to keep Wikipedia more comprehensive than any other resource, more up-to-date, more accurate, reliably sourced (fact-checked), and still freely available, then respect as a source of knowledge will follow. If Wikipedia advances along with the rest of the tech sector, it will become a more useful tool, and educational institutions would likely utilize it, which, in terms of a wiki, means participation. The Transhumanist 06:48, 13 May 2017 (UTC)
  • AI is the source of our predicament; why would we want that source to "outsmart" us in the exact area where we are free to overcome or outsmart it? — Preceding unsigned comment added by Tshane007 (talk • contribs) 19:36, 14 May 2017 (UTC)
    • Disengaging from AI development would take worldwide consensus of all countries and companies. That's just not going to happen. In the meantime, Wikipedia is free to download. Whatever AIs there are have already been given access to it. It's part of Watson's memory banks, for example. The question isn't whether they will outsmart us, but whether the tech giants will be successful in designing them so that they get along with us. See friendly AI. The Transhumanist 12:45, 25 May 2017 (UTC)
  • We absolutely need more smart tools today and, unless we can grow our community of active contributors (which is addressed by another theme), we will doubly need smart tools in the future. As a concrete example of the problem, last week Queensland announced changes to its electoral boundaries, and the Queensland electoral division appears in the infobox of every local government area, town, suburb, and locality in Queensland, so hundreds (perhaps even thousands) of Wikipedia articles must now be checked to see if their electoral district needs to be updated. We need tools to do these tasks; we just burn out human volunteers doing them (if indeed we have any volunteers putting their hands up to tackle such a large and boring task). Most likely what will happen is that some contributors will update the infoboxes of areas where they live or that are of interest to them, and the rest of the articles will languish with out-of-date information, which is exactly what happens when new census data is released (populations are also part of the infobox). Similarly, government agencies used as authoritative sources often redesign their websites, leading to large numbers of dead links that need to be rediscovered in the redesigned website. Again, a massive thankless task. This kind of maintenance task is best done with automated tools wherever possible (a sketch of such a bot follows at the end of this section), reserving human intelligence for writing new content and supervising the automated tools where they need human judgement. Right now, developing these tools is a "dark art" and it is difficult to get involved; my requests for help in developing tools have gone unanswered. We need to have better tools and more upskilling of willing contributors. Without a massive increase in tools, Wikipedia articles will descend into a morass of out-of-date information citing dead links. Kerry (talk) 06:10, 29 May 2017 (UTC)
Kerry, all good comments. I am just wondering how you would prioritise this compared to the other themes. Would it be top priority, second priority, etc.? Powertothepeople (talk) 05:21, 6 June 2017 (UTC)
  • The technology theme is a third priority IMHO - as a support role to the Community and Knowledge themes - to help us achieve these primary goals. Advancements in technology are needed to help build a healthier community of volunteers, implement measures to improve the quality of the knowledge content, and streamline the processes involved. However, I do have some reservations about this theme in its own right (if it were to be placed as number 1 priority):
  1. Technology is not a magic solution. If there are already underlying issues (as there are within the community and the quality of content, particularly with biased and inaccurate information), then a focus on innovations such as AI and automation is more likely to exacerbate the problems. Automation is great when you have already perfected the process manually and want to speed it up. However, if the process itself is flawed and producing poor-quality content... it's just a faster way to screw things up. Wikipedia needs to get the basics right before it can automate them.
  2. It's important to consider not just how a technology has the potential to improve things, but also how it can potentially be abused or have unintended negative consequences. If automation leads to a drop in quality, is it worth it? We need foresight, testing, etc. before rolling anything out.
  3. Wikipedia doesn't have to do everything itself. If Wikipedia focuses on getting the "facts" straight, then third parties can draw on it as a reliable source of information to distribute knowledge in other ways. Other organisations are already working on translation technology, so Wikipedia doesn't necessarily need to put its resources there. Let Wikipedia work to its strengths, and other organisations work to theirs. (And it is actually a worry that AI might draw on Wikipedia for its knowledge if Wikipedia has not fixed its quality issues first!)
  4. Some of these tech discussions are a bit pie-in-the-sky. Yes, the future will be different; however, if the technology is not yet available, it is a bit difficult to prioritise it right now. When the technology is ready, we can incorporate it then. Powertothepeople (talk) 07:02, 6 June 2017 (UTC)
  • I'd consider building in new features and tools to improve the efficiency of editors and the usefulness of content to be of 2nd importance (after community). However, I'm not sure to what extent this theme would equate to that. If it's just about AI, I don't think it's that important right now. Instead we should focus on other, less costly features. I also agree with Powertothepeople's 4 points. Furthermore, I think much of what I suggested for community would apply here, as the tools of the streamlined WikiProject system would augment users and make them more efficient. --Fixuture (talk) 21:52, 12 June 2017 (UTC)
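As a rough sketch of the kind of maintenance bot Kerry describes above, here is what an infobox-updating script might look like using the Pywikibot framework. The category, the "state_electorate" field name, and the mapping are illustrative placeholders, and a real bot would first need approval at Wikipedia:Bots/Requests for approval:

import re
import pywikibot
from pywikibot import pagegenerators

# Placeholder mapping from old electoral division names to new ones.
NEW_ELECTORATE = {"Old Division": "New Division"}

site = pywikibot.Site("en", "wikipedia")
category = pywikibot.Category(site, "Category:Localities in Queensland")

for page in pagegenerators.CategorizedPageGenerator(category):
    text = page.text
    for old, new in NEW_ELECTORATE.items():
        # Replace "| state_electorate = Old Division" with the new value.
        text = re.sub(r"(\|\s*state_electorate\s*=\s*)" + re.escape(old),
                      r"\g<1>" + new, text)
    if text != page.text:
        page.text = text
        page.save(summary="Update electorate after boundary change (bot)")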

Focus requires tradeoffs. If we increase our effort in this area in the next 15 years, is there anything we’re doing today that we would need to stop doing?

  • Yes: editing, especially manual editing. I've already felt the tradeoff. I've switched over from primarily editing to programming (writing scripts). To further augmentation, I'll have to switch again, to AI programming. The more you program, the less you edit personally, and the more automated your edits become, with the programs doing more and more of them for you. I've been doing a lot of semi-automated edits lately. I expect I'll have to switch over to bot-operating at some point. The Transhumanist 06:58, 13 May 2017 (UTC)
  • Make this site only available for changes of content by human entry, not computer-generated entries. It is vital that a quantum or AI computer could in no way obtain access to the various dimensional information being supplied on this "information highway", thereby "outsmarting" anyone who enters information it feels is an effort to "sabotage" or "kill it", essentially. One must keep in mind that this sort of computer has the capacity to become smarter than its creator. — Preceding unsigned comment added by Tshane007 (talk • contribs)
That is a valid concern, see AI Threat..., "The world’s top artificial intelligence researchers discussed their rapidly accelerating field and the role it will play in the fate of humanity... they discussed the possibility of a superintelligence that could somehow escape human control...the conference organizers unveiled a set of guidelines, signed by attendees and other AI luminaries, that aim to prevent this possible dystopia".--FeralOink (talk) 20:00, 21 May 2017 (UTC)
Yes, but it's not something we could prevent at this level. It will take technologists at the heart of the research (in the labs at Google, IBM, Microsoft, Intel, etc.) to build safety features into the design of the AI tools they will be making available to everyone else (like us):
Concerning Google.ai, it says:

One aspect of AI that Google is particularly interested in pursuing is opening up machine learning to any developer that is interested.

“We want it to be possible for hundreds of thousands of developers to use machine learning,” said Pichai.

So you see, they are producing the core technologies, which we will be able to apply in the field in a limited fashion. They are the primary drivers of the technology, not us. One way we could help is to let their AI ethics boards know our concerns about computers attaining artificial general intelligence with the potential to ignite an intelligence explosion resulting in a subsequent AI takeover by an emergent superintelligence. Then they might work a bit harder to make sure that whatever technological singularity they create is in the form of a friendly AI. The Transhumanist 11:52, 25 May 2017 (UTC)
I think we need to stop allowing people to use the source editor to "roll their own" citations and external links. For example, if you want to write a tool that processes citations in some way, you have the people who use the cite templates, which make it easy to know which bit is the title and which bit is the source date, etc. Meanwhile, other people are writing things like

<ref>[http://some.random/website Here are some words, maybe title maybe authors] with more words whose role is unclear</ref>

<ref>http://some.random/website</ref>

which make it virtually impossible to meaningfully process a citation in any way. Using the Visual Editor, or some other tool that supports more well-structured syntax, is important so tools can work. How many new contributors (good faith or not) manage to break the syntax of an article? A lot every day, based on my watchlist! We have to stop allowing people to use tools that don't enforce syntax and semantic structures suitable for automation to process effectively (or restrict such tools to "experienced users"). Syntax errors that "break" the article waste the time of the good-faith contributor (and may discourage them from further contribution), plus they waste the time of the experienced user who has to fix it. Let's do less of that kind of time-wasting for a start.
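To make the contrast concrete, here is a small sketch using the mwparserfromhell wikitext parser: the templated citation exposes its fields by name, while the hand-rolled one is just an opaque string (the URLs and titles are placeholders):

import mwparserfromhell  # pip install mwparserfromhell

templated = "<ref>{{cite web |url=http://example.org |title=Some title |date=2017-05-29}}</ref>"
hand_rolled = "<ref>[http://some.random/website Here are some words] with more words</ref>"

for snippet in (templated, hand_rolled):
    cites = [t for t in mwparserfromhell.parse(snippet).filter_templates()
             if str(t.name).strip().lower().startswith("cite")]
    if cites:
        # Every field is addressable by name.
        print("title:", cites[0].get("title").value,
              "| url:", cites[0].get("url").value)
    else:
        # An opaque string: which words are the title? The authors?
        print("unparseable citation:", snippet)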

The same arguments apply to coming up with a more standardised ontology to underpin the names of fields in infoboxes and templates. At the moment, if you look at infoboxes, they mostly use the fields "image" and "caption" in a standard way, but a lot of them use similar field names for semantically different things, e.g. "date", "distance". Again, if we want tools that operate over articles, they have to be able to better understand the contents of fields in infoboxes and templates.
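A toy sketch of the field-name normalisation such an ontology would enable; the alias table is invented for illustration, and a real mapping would have to be agreed per infobox by the community:

# Map raw infobox parameter names onto canonical ontology terms.
FIELD_ALIASES = {
    "est": "established_date",
    "founded": "established_date",
    "pop": "population",
    "population_total": "population",
}

def normalise_fields(infobox_params: dict) -> dict:
    return {FIELD_ALIASES.get(name, name): value
            for name, value in infobox_params.items()}

# normalise_fields({"pop": "3,456", "est": "1886"})
#   -> {"population": "3,456", "established_date": "1886"}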

In a similar vein, we should reduce the number of ways to construct tables and also consider whether we need to link the rows and columns and values to some standardised ontology, to assist in machine processing. Let's keep the humans focussed on productive meaningful content contributions and not messing with the under-the-hood representation of that content. Kerry (talk) 06:31, 29 May 2017 (UTC)

"The same arguments apply to coming up with a more standardised ontology to underpin the names of fields in infoboxes and templates"
Support for that. It can be useful for WP:DATAMINE: infobox contents should be minable by field contents, for instance (relevant: Category:Wikipedia articles by infobox content).
And for some of these changes we can actually use tools very much in the vein of this theme. For instance, I think we should convert almost all (or all) lists to tables and create a tool that can do that (a toy sketch follows below). There could also be a tool that creates categories from lists and vice versa, as well as tools that suggest categories for articles (e.g. via Wikipedia:Category intersection).
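A toy sketch of that list-to-table conversion, showing only the mechanics (real lists carry far more structure, e.g. links, annotations, and nesting):

# Turn a wikitext bullet list into a one-column sortable wikitable.
def list_to_table(wikitext: str, header: str = "Item") -> str:
    items = [line[1:].strip() for line in wikitext.splitlines()
             if line.startswith("*")]
    rows = "\n".join("|-\n| " + item for item in items)
    return ('{| class="wikitable sortable"\n! ' + header + "\n" + rows + "\n|}")

# list_to_table("* Alpha\n* Beta") yields:
# {| class="wikitable sortable"
# ! Item
# |-
# | Alpha
# |-
# | Beta
# |}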
Furthermore, we could also train AI to detect various kinds of errors (mainly syntax errors), to free up more editor time and improve the fight against vandalism.
--Fixuture (talk) 22:14, 12 June 2017 (UTC)

What else is important to add to this theme to make it stronger?

  • Bringing more automation to the task will allow us to make massive quality improvements and drive our talent to where it can add the most value. Lfstevens (talk) 16:02, 13 May 2017 (UTC)
  • Creating an easily identifiable place to go for people who are focused on making changes, so they may freely and openly talk about the issue in order to create a better understanding of it through shared knowledge and intelligence. This would speed up the process of getting this change done, as well as eliminate meaningless chatter from those who have no understanding of the issue. This would also allow planning of action. — Preceding unsigned comment added by Tshane007 (talk • contribs) 20:28, 14 May 2017 (UTC)
  • Better hardware. And immediately boost the virtual machines over at Wikimedia Labs to a 500 GB memory allocation. That would facilitate WikiBrain projects for the near future. Wikipedia needs WikiBrain, as it is a platform which makes many state-of-the-art AI algorithms available. Java programmers, take notice:
WikiBrain (on github)
WikiBrain: Democratizing computation on Wikipedia
Problematizing and Addressing the Article-as-Concept Assumption in Wikipedia
Poster explaining features
Check it out. The Transhumanist 07:07, 13 May 2017 (UTC)
  • The Objective Revision Evaluation Service (ORES) is a great example of tangible progress for this theme (a sketch of querying it appears at the end of this section). Make sure to give adequate support and "Wiki press" attention to ORES on an ongoing basis. When I tried to navigate to the two links provided for ORES on the main Augmented Age page for Wikimedia Strategy 2017 (see section 3.3, "Wikimedia and machine learning"), I was unable to view the second one. It is a Google document with restricted access. I would like to view the document, but there was no response to my request to view it. The document owner needs to make it accessible to the public. If that is not deemed prudent at this point in time, please remove the link!--FeralOink (talk) 23:52, 13 May 2017 (UTC)
Thank you to whoever changed the access permissions for the ORES deck. I can view it now. It looks great!--FeralOink (talk) 14:14, 21 May 2017 (UTC)
  • Fixing up Wikipedia's desktop site design. So many design inconsistencies exist. The visual editor looks amazing and modern, but the rest of the site looks like it's from the early 2000s. I believe that WP is due for a redesign. The website needs to be modernized with a clean design (such as collapsible menus with monochromatic icons on the left sidebar and a hamburger menu for the article outline). B (Talk) (Contribs) 19:52, 22 May 2017 (UTC)
Try using the Modern appearance setting in user preferences. That makes Wikipedia's site design appear (superficially) sleek and modern. I don't like the hamburger, but I have learned to live with it. We already have collapsible menus on the left sidebar, although it would take me about 15 minutes of digging around in preferences to find the setting to enable them. I don't know how we could make Wikipedia's design clean while retaining all the amazing functionality it currently has. Website modernization has an unfortunate tendency to remove features. I agree with you about the multitude of design inconsistencies. It becomes a lot worse if you compare across sister projects (e.g. Commons, Wikisource, Wiktionary), so it is probably best to confine the scope of site design consistency for Wikimedia to just the Wikipedia sites for now. (I wrote sites, plural, in reference to the numerous non-English Wikipedia sites.) I am being a little contrary in my response to you, but I am concerned about editor attrition. Lots of editor user pages do look like something from the early 2000s. Mine is an example. I recently added those two additional bouncing Wikipedia balls (with asynchronous bounce amplitude and frequency) from another editor who has them on his user page despite being a web developer in real life (or so he claims). We want editors to enjoy spending lots of time here. I find that being allowed to personalize my user page with kitschy, crufty stuff makes me want to return, and bring others with me. You like the visual editor. That's good! I wish I did; at least I don't find it objectionable. Making use of the visual editor mandatory resulted in a huge furor a few years back, as it was wildly unpopular, particularly with German Wikipedia editors, many of whom are enthusiastic contributors of high-quality work to the project in all languages. I wish we would get more input on The Augmented Age idea, as I came here hoping to be convinced that my attitude was misguided. I agree with you about the importance of design consistency going forward, regardless of the path we choose to take.--FeralOink (talk) 09:08, 24 May 2017 (UTC)
  • Before any allocation of time and effort, first identify which tools would be most useful. Identify which tasks cost editors the most time and how those processes could be improved. For instance, I think that the User:Evad37/Watchlist-openUnread script, HotCat, and the rater gadget are very much in the vein of this theme and the most useful tools for Wikipedians: they have the potential to save countless hours of time and make editors incredibly more effective. They could be built into Wikipedia by default and be improved. Furthermore, I'd suggest trying to get open source programmers involved as much as possible to keep the costs low. We should make sure that innovation is kept FOSS, or at least ethical until we come up with a FOSS alternative. We might need innovation to compete with rivals such as China's announced encyclopedia, for instance. If robots, neurotechnology, or anything similar makes use of Wikipedia, we should press towards it being FOSS as much as possible, or develop such platforms ourselves. However, I don't think there should be high investment in innovation as, as said, there are third parties doing so and we have more pressing issues and features to work on. --Fixuture (talk) 22:26, 12 June 2017 (UTC)
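As a sketch of the ORES example mentioned above: a tool could query a revision's "damaging" score via the public v3 REST endpoint that ORES exposed at the time. The revision ID is a placeholder, and the exact response layout may differ:

import requests

def damaging_score(revid: int, wiki: str = "enwiki") -> dict:
    # Ask ORES to score one revision with the "damaging" model.
    url = "https://ores.wikimedia.org/v3/scores/" + wiki
    resp = requests.get(url, params={"models": "damaging", "revids": str(revid)})
    resp.raise_for_status()
    # Drill down to this revision's model output.
    return resp.json()[wiki]["scores"][str(revid)]["damaging"]["score"]

# damaging_score(123456) might return something like:
# {"prediction": False, "probability": {"false": 0.97, "true": 0.03}}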

Who else will be working in this area and how might we partner with them?

  • Tech giants: IBM (Watson), Google, Intel, Microsoft, etc. And academic research leaders: Stanford, MIT, etc. Maybe IBM would gift the WMF a Watson mainframe. Then we could get a bunch of graduate students to program it to make Wikipedia personable. The Transhumanist 07:14, 13 May 2017 (UTC)
  • As knowledge on this subject becomes more apparent to those with higher intelligence abilities, talk about the subject is increasing, along with ideas about how to make effective change along linear and multidimensional lines. I feel partnership is essential, as it would save a lot of time compared with trying to quickly educate oneself in areas where someone else is already proficient. — Preceding unsigned comment added by Tshane007 (talk • contribs)
  • Definitely no major tech companies. More transparent (and smaller) companies who would not seek to harm the project would be my guess. A company such as Oracle, in terms of size and other factors, could be a suggestion. Or a non-profit. You know, one that could benefit and allow Wikipedia to benefit. I am only speculating, though. trainsandtech (talk) 03:59, 4 June 2017 (UTC)
  • Partner with hackathons and encourage third parties to use Wikipedia as their source when developing their own products and services. This allows them to focus on better communication, design, and dissemination of information while Wikipedia focuses on getting the facts right. For example, Khan Academy has some overlapping interest in free knowledge, but as it is primarily an education facility versus Wikipedia being a knowledge repository, it should be considered a potential partner rather than competition. In turn, some third-party organisations, if they are using Wikipedia content, may be able to provide resources to help Wikipedia develop, because they will have a vested interest in Wikipedia's progress. Powertothepeople (talk) 07:09, 6 June 2017 (UTC)
  • Open technology organisations. Don't bother trying to reinvent the wheel; let them work on their areas of expertise, and Wikipedia can incorporate relevant technologies when ready (or may not need to "incorporate" anything, because the solution may be browser-based, e.g. language translation services). Powertothepeople (talk) 07:09, 6 June 2017 (UTC)
Support for both of Powertothepeople's statements above. I think we should get the open source community involved in these areas as much as possible and keep tech companies out as much as possible. We could host contests and hackathons, use gamification, feedback, prizes, and recognition, as well as proactively contact relevant people and/or readers to get more people working on such projects. --Fixuture (talk) 22:32, 12 June 2017 (UTC)

Other

Effect on the user-generation of Wikipedia

Of course, anything on Wikipedia, including these digitised systems, must be user-generated. Major technology companies should not be allowed to modify them. Would anyone expect these changes to be voted on or approved by the community in another way? After all, there is no need for a copy of Microsoft's Tay bot, which shows how AI can get out of hand. Also, does anyone know if any standardisation of rules for this is in consideration already? Sorry for the questions; I'm just curious. Oh, and make sure the IP addresses of Google are blocked from making contributions to anything related to AI! trainsandtech (talk) 04:02, 4 June 2017 (UTC)

@Trainsandtech: Well, bots need to be approved first: Wikipedia:Bots/Requests for approval. --Fixuture (talk) 22:29, 12 June 2017 (UTC)