Forum Replies Created
November 2, 2018 at 9:52 pm #538
Just a minor suggestion: there have been no new posts on Sutra for months, so it might as well be retired. If there’s a need/want to migrate the posts from there to here and not lose them (I wonder if they can be hidden, deleted or archived/frozen), it depends on Sutra how helpful it is for such tasks (usually a main concern before using any service: can you get off of it again, can you get your data out and move to another place easily?). But as those posts aren’t too many, they could be transferred manually; re-creating the identities/accounts as the originators of the posts, the dates of publication etc. would be the usual problem.
Whatever is done, splitting attention and scattering data/conversations is bad, but migration is expensive as well; that’s the typical stuckness that can be observed everywhere, with no solutions available.
November 2, 2018 at 9:42 pm #537
Hello Heidi,
one thing we can do for referring to other posts is to make links or mentions like this: #511 – you can get the post number (absolute/global? relative/local?) from the top right of the post, in light grey. The editor also supports a <blockquote> (the “b-quote” button), like this:
“I cannot answer directly to specific comments made before.”
Not perfect like a threaded conversation, but good enough for now, I guess.
Don’t worry, on the web it’s also difficult for technical people to make things work properly, so a good part of it is ugly hacks, workarounds and plain ignorance. I think I don’t have permissions for the site either. The procedure for requesting those is to signal Josh via Facebook or Zoom or something like that, but in any case, the question would be what to change and why.
With the Living GCC Journey map, I think there is an issue with coggle.it: the gratis or paid plan/account expired, so it’s as if the map never existed. I don’t understand the point of entering such dependencies, but that’s apparently the way it is today.
November 2, 2018 at 9:29 pm #536
The recent discussion on Facebook about Facebook is deeply discouraging. I think there’s neither a game nor players.
November 2, 2018 at 9:20 pm #534
Good summary, Alex, that’s what I’m saying in more detail. Note that it has nothing to do with the newness or oldness of a website: it’s a different technological approach, so it can very well make sense to build non-auto-updating websites today, or to have pretty old websites into which some auto-updating was inserted here and there. The question is where it is needed. If this forum is supposed to be (ab?)used for real-time live conversation, it might not be designed to help with that too much.
October 31, 2018 at 1:57 am #526
Just listened to 20181028 GCC Sunday Unblocking, especially what follows from 2:41:19 on. It’s really hard to find a reason why somebody should/would want to play this particular game. If there are questions, wants/needs or interesting topics, sure, but that’s about it. The usual throw-away communication one expects to waste in the absence of better systems, while being under the impression of wanting/needing to talk to somebody else.
October 31, 2018 at 1:44 am #525
It would be great to do both, as both are important: do the one and don’t leave out the other, though maybe not simultaneously. It would probably help if it were clearly indicated which one is which, with some discipline towards both.
October 31, 2018 at 1:40 am #524
Probably yes. Traditionally, the web/WWW works like this: you click links or buttons on a website, and the browser asks a server machine what content is stored under that address. The server sends the answer data, closes the connection and completely forgets who asked and that anybody asked at all. In that model, if something changes on the server side later (caused by a second client or by the server itself), the first client not only has no idea; even when requesting the same information a second time, parts of the previous, old answer might remain cached locally to optimize performance and bandwidth. PHP is a programming language that allows developers to program a server for this model; WordPress is largely written in PHP.
Then “web 2.0” arrived with the advent of an application programming interface (API for short, basically a function/capability) called AJAX that was introduced for JavaScript in the browsers (not standardized, by the way; vendors have different hacks and APIs for doing the same thing). JavaScript is a programming language for the client/browser, to make things move and become dynamic locally without any server involved. That API allowed programmers to send requests to a server based on arbitrary events instead of restricting requests to the click on a link, and those requests would not move the current browser URL to a different one either, but just let the programmer get the answer data and then do things with it while still remaining on the same webpage and site address.
This encourages loading small snippets like the current weather, or the famous and useful “load more” (so you don’t navigate through many pages with “next” and “previous”, each listing some 20 entries or so; instead, the additionally loaded entries get added to what you’re already looking at, so the same page gets longer and longer). It also allows for real-time communication with other clients by exchanging messages via the shared server, without the need to manually refresh the page and explicitly send a request to ask for updates since the last retrieval: a lot of small requests checking whether there is something new can be made automatically in short intervals.
WordPress uses AJAX too, but not much; the main and core parts are still ordinary PHP and web 1.0. There are some advantages to that: content loaded via AJAX and integrated into the page has no separate address that can be bookmarked or linked to, and it requires exceptional consideration to build decent web 2.0 services, to the extent that humanity might not have produced a single one yet that is in wider use and not merely experimental, so most websites do some crappy tricks just to get their job done, which, for example, seems to be the case for the member list here, but I’m not sure whether that’s a core WordPress feature or one from BuddyPress.
So I can’t really answer the question and don’t want to investigate, but it pretty much looks like there is no real-time auto-update. That’s not to say it couldn’t also be a hybrid: instead of expanding the page as soon as something new is found on the server, a page can ask frequently whether there have been any updates and then show a notification that the current page is outdated, letting the user decide whether it should be reloaded or whether he wants to continue with whatever he was doing at the moment (probably reading, writing or copying, which should not be disturbed by something magically happening, interrupting the current activity, diverting attention/focus and what not).
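To make that hybrid a bit more concrete, here is a minimal, hypothetical sketch in TypeScript of such a poll-and-notify pattern. The /api/latest-post-id endpoint and the showBanner hook are made up for illustration; this is not how this forum actually works, just the general shape of the technique:
```typescript
// Poll-and-notify sketch: ask the server periodically whether anything changed,
// and only offer a reload instead of rewriting the page under the reader.
let knownLatestId: number | null = null;

async function checkForUpdates(): Promise<void> {
  try {
    const res = await fetch("/api/latest-post-id", { cache: "no-store" });
    if (!res.ok) return;                       // server trouble: just try again next round
    const { latestId } = (await res.json()) as { latestId: number };
    if (knownLatestId === null) {
      knownLatestId = latestId;                // first run: remember the current state
    } else if (latestId !== knownLatestId) {
      showBanner("New posts have arrived. Reload the page to see them?");
    }
  } catch {
    // Network hiccup: silently skip this round.
  }
}

function showBanner(message: string): void {
  // Hypothetical UI hook; a real page would render a small dismissible bar.
  console.log(message);
}

// Ask every 30 seconds instead of pushing content into the page.
setInterval(checkForUpdates, 30_000);
```
The point is that the decision to refresh stays with the user; the script only detects and announces the change.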
October 30, 2018 at 7:42 am #523
Hi Heidi,
I recently updated my profile picture with a relatively large 2.5 megabyte PNG; that’s via the dashboard → “Profile” in the sidebar → tab “Extended Profile” (not Gravatar on the regular “Profile” tab) → “Profile Photo” on the right. The uploaded picture gets scaled down, I guess, but servers and the software can have maximum sizes for file uploads, which could be a problem to run into.
October 28, 2018 at 7:58 pm #493
October 28, 2018 at 7:48 pm #492
The contact form at the bottom of HOME seems to be unattended. That all entered text is displayed in upper case/capitalized is a minor UX issue. It may make sense for the e-mail address, as those are case-insensitive, but if people can’t see what they’re typing, you might get all lower, upper or mixed case on the receiving end.
October 28, 2018 at 7:46 pm #491
This site should prominently link to globalchallengescollaboration.wordpress.com, because that’s where the index of other places is (and link back: this official (landing?) page isn’t mentioned in the index, imagine that!), or the content should be incorporated into a single site so the other can be retired.
October 28, 2018 at 7:32 pm #490
I know some WordPress… from reading “Professional WordPress: Design and Development” by Brad Williams, David Damstra and Hal Stern, but I never used any of it. Instead, I automate it from the outside, as a mere hosting/storage and publishing utility, via its XML-RPC and JSON APIs:
- Upload from a local client application
- Submitting/publishing from another remote server
- Local client application that offers a glossary
- Local client application that downloads all posts
- Local client workflow application that downloads all posts and generates a PDF and e-book from them
Rather primitive, but it works more or less. A few settings have to enable it, which is the case by default, but administrators can turn it off (which makes little sense, but OK). On wordpress.com instances, none of it works, because Automattic needs to make money somehow if they’re offering you the service gratis. As I don’t like the WWW because of its completely stupid limitations, I also don’t like WordPress, because it’s made for the WWW and therefore causes many problems. Sometimes I can work with WordPress, and then it’s not too bad: widely in use and libre-freely licensed (though not ideally licensed, which would be AGPL; it’s stuck on an old pre-network license, the GPL), but I don’t want to invest into this server package where much simpler server packages would do way better.
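For illustration, here is a rough sketch (in TypeScript this time, not the actual tooling mentioned above) of the “download all posts” case against the JSON side. The /wp-json/wp/v2/posts route and its per_page/page parameters are the standard WordPress REST API; the site URL is a placeholder, and a real client would add error handling and authentication for non-public content:
```typescript
// Fetch every published post of a WordPress site via its JSON REST API, page by page.
interface WpPost {
  id: number;
  date: string;
  link: string;
  title: { rendered: string };
  content: { rendered: string };
}

async function downloadAllPosts(siteUrl: string): Promise<WpPost[]> {
  const all: WpPost[] = [];
  const perPage = 100;                                   // maximum the API allows per request
  for (let page = 1; ; page++) {
    const url = `${siteUrl}/wp-json/wp/v2/posts?per_page=${perPage}&page=${page}`;
    const res = await fetch(url);
    if (!res.ok) break;                                  // past the last page, or API disabled
    const batch = (await res.json()) as WpPost[];
    all.push(...batch);
    if (batch.length < perPage) break;                   // last page reached
  }
  return all;
}

// Usage: grab everything and hand it to a PDF/e-book generator of your choice.
downloadAllPosts("https://example.org")
  .then((posts) => console.log(`Fetched ${posts.length} posts`));
```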
If there’s a shortage of re-inventing more of the same wheel, one might want to look into jrnl, by far not the only parallel effort to do more WWW things in a WWW way. Do I want to train people to use WordPress, so that more people join in increasing the mess? I don’t think so; I’d rather have a system that doesn’t need that kind of training.
October 15, 2018 at 9:36 pm #343
Oh, apologies, never mind. I didn’t put most of my thinking out in writing until recently, so this was another chance to start a material collection on that particular topic, to be expanded later eventually. Here’s what I basically want to say: there are big issues with the human side as well as with the urgent, complex world problems, and also on the technological/tool side regardless of the other two. It’s not that we necessarily need more technology; the stuff we have is already quite bad, even if it might not appear so. In terms of integral colors, my impression is that almost all of it is from and of the red (and not orange, as one might assume/expect). I hope to get my own computing more into the other colors; I have a few ideas how to do it and will share the results.
Wikipedia aims for a single, consensus-based result, therefore has some rules and policies and isn’t designed to capture everything. For people to have an article about them included, I’m not too familiar with the exact details, but I think one cannot write it oneself, one can’t cite oneself as the source (for reasons of bias and lack of independent verifiability), some sort of general, public interest in the individual may be required (maybe to a degree similar to the limitations on privacy for people of public interest), plus coverage in various sources, something like that. In my mind, there’s no reason why an encyclopedia shouldn’t include everybody who ever lived, but that might legitimately not be the job of the Wikipedia; it’s also not an address book or a social media profile page. I also wouldn’t call it censorship, because censorship is usually done by a state actor, and even if not, no website, newspaper or book publisher is necessarily forced to distribute statements that are not in their own interest (freedom of the press), especially if the author doesn’t pay for such a service; most of them aren’t public infrastructure or a charity, but private enterprises (excluding Leserbriefe/letters to the editor, which come in again via journalism and a healthy media/debate landscape).
When it comes to the strong focus on the mainstream, that comes from the policy/requirement that every statement must be supported by an authoritative source (citations), so that random guessing and gossip can be prevented and ideally verifiable, reproducible information ends up in the article, against which one can hardly argue and which therefore becomes quite reliable.
“Openness” can mean many different things, and while the Wikipedia indeed might not be very open in some respects, it’s open/free in the sense that you could get a full copy of the entire thing, publish it yourself, and, if you want, add every missing entry and piece of information in your own instance of it.
This is not intended to vindicate the Wikipedia (things go wrong there too, as humans are involved), but to explain a little that it has a job and does that one reasonably well, arguably better than the alternatives we had before; it’s a pretty orange tool in terms of integral colors. Wikipedia is also a community, also data, also a software project (MediaWiki). Furthermore, there are alternatives that avoid those particular issues, like the Federated Wiki, which allows everybody to fork off their personal view of the world and of the statements made by others. Even if all of this isn’t seen as an advancement and benefit, one has to recognize that a lot of people manage to collaborate on a grand scale according to at least some alignment (no matter how questionable it might be), across disciplines, largely without ever having met.
October 13, 2018 at 3:22 pm #340
That’s not to say that written or visual media don’t come with their own bag of downsides, but I don’t see a reason why only one media form should be used, or one that’s not ideal for the given task, or only in a bad way without realizing its full potential.
Today, I don’t think any more that it makes sense to try to describe how things could work, for various reasons; that’s a certain path to failure in terms of Harry’s question. What does work is to just do it, even at the risk that it turns out to be just my hobby-horse, that it ends up as a technical solution only for a technical user, or that it creates other or more problems by not recognizing the needs of other stakeholders, being one-sided and not informed by the other perspectives. My recipe against that is to work on my own needs; that way it can’t go terribly wrong. I realize that the needs of a single entity monologuing in solitude are very different from what’s needed in a setting of two or more users (potentially non-cooperating entities, dialogue going on, the need to respect the sovereignty/independence of the other entities, the requirement to organize collaboration and methodology, etc.), but I can still start working on those capabilities later and connect my single-user individual capabilities with the multi-user scenario. What I really can’t do effectively is a multi-user scenario on my own, and pretending to have one with myself might miss the real practical needs of a real group that actually exists. It would be possible, but too risky, given the limited (time) resources available.
But to not leave it like that, a few practical suggestions:
- Data should self-declare its format and semantic meaning. Most of the time, the format/meaning (on several levels, from byte order to file type to format convention to versioned format vocabulary/ontology to advanced federation/augmentation recommendations) is unclear and determined by heuristic guessing, which is bad in terms of information encoding theory and prevents reliable technical bootstrapping of semantics, which would allow agents/clients/tools to automatically perform actions on data when they encounter it, without a human manually deciding what to do. The format and semantic meaning should also be indicated in metadata, so a client can decide which resources to retrieve or not to retrieve, and which tool/component/converter might be best suited to handle them.
- Servers and online services are a dependency. They can go away at any time, connectivity isn’t a given, they can change the way they work, and they’re always compromising confidentiality if the data isn’t public anyway or encrypted end-to-end on the client independently. There’s no real reason why the same capabilities shouldn’t be available and performed on the client side.
- Mixing executable code with data is a very, very, very bad idea. It never worked securely and might never work as long as Turing-complete machines exist. It didn’t work with macros in Microsoft Word/Excel, it didn’t work in Adobe Flash, it doesn’t work with HTML in e-mails, it didn’t work with ActiveX or Java Applets, it doesn’t work with JavaScript on the web, and it accounts for a large share of security breaches. We only tolerate it on the web because most people use crappy operating systems that don’t get their software from secure, independent, libre-freely licensed repositories, so the only way to securely run remote software while avoiding the even greater danger of locally installing it is distribution/execution in the sandboxed browser, which in return prevents reasonable actions that are important for a decent hypertext environment and renders the web unfit as a text system, reducing it to a mere environment for online applications. Furthermore, the web convinces users that the data and the software are somehow integrated into a pre-defined, single experience. There’s no reason why the mere storage of data should not allow a plethora of different user interfaces and software functions that are not dictated by the provider who operates the data storage facility. Then the client is able to apply different interpretations and renderings to the same data, based on configuration, preference, recommendation, etc., or perform different actions on the data than the original author intended.
- A mechanism is needed that indicates beyond doubt the legal status of a human expression, so the user can filter for only those, so repositories/libraries of libre-freely licensed material can form, which can be used without having to worry about legal trouble as long as the terms of the libre-free licenses are observed.
- A capability standard + infrastructure is needed (an API to program/script against) that allows the flexible combination of individual functions to perform more advanced functions automatically, manually or in a combination of both; a rough sketch follows this list. The actual implementations of each capability can be switched without the caller noticing (abstracting away the implementation behind standardized protocols), and most of the capabilities should be invokable remotely as well as locally (and in a mix of both in the same action), for which ideally the same implementation code should be usable online as well as offline without adjustment, if possible.
- To be continued…
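To make the capability point above a little more tangible, here is a rough TypeScript sketch. The document-conversion capability and all names in it are illustrative assumptions, not an existing standard; the only point is that the caller programs against the abstract capability while local and remote implementations stay interchangeable:
```typescript
// One capability, two interchangeable implementations (local and remote).
interface ConvertCapability {
  // Convert input of a declared source format into the requested target format.
  convert(input: Uint8Array, fromFormat: string, toFormat: string): Promise<Uint8Array>;
}

// A local implementation that could wrap an installed converter tool.
class LocalConverter implements ConvertCapability {
  async convert(input: Uint8Array, fromFormat: string, toFormat: string): Promise<Uint8Array> {
    // Real conversion logic would live here; identity transform as a stand-in.
    return input;
  }
}

// A remote implementation that forwards the same call to a service endpoint.
class RemoteConverter implements ConvertCapability {
  constructor(private endpoint: string) {}
  async convert(input: Uint8Array, fromFormat: string, toFormat: string): Promise<Uint8Array> {
    const query = `?from=${encodeURIComponent(fromFormat)}&to=${encodeURIComponent(toFormat)}`;
    const res = await fetch(this.endpoint + query, { method: "POST", body: input });
    return new Uint8Array(await res.arrayBuffer());
  }
}

// The caller only sees the capability, not which implementation serves it.
async function toPdf(doc: Uint8Array, converter: ConvertCapability): Promise<Uint8Array> {
  return converter.convert(doc, "text/markdown", "application/pdf");
}
```
Whether the work happens locally or on some remote machine then becomes a configuration decision, not something baked into the calling code.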
Regarding trust, I go with the notion that’s the default for security-related software: design for as little need for trust as possible, and always leave the decision whether, to what degree and whom to trust to the user. Proprietary software rarely offers this choice; one has to blindly trust the vendor. There’s probably no technological solution for a social question like trust, but if trust is established in the social realm, there is technology available to ensure that no third, external actor can abuse the established trust between two parties, which is also rarely used/implemented/offered by proprietary software.
It’s more likely that you can do everything on your own (except communication-related actions, which presuppose the exchange with a second entity) if you have the data and the software, and there’s little reason why you shouldn’t have both. So you should be in absolute control over your own computing (notions of sovereignty, property, independence), but it’s also important to realize that once you’ve given your data away to a second entity or have a function performed by somebody else, you have waived all effective control over it forever. It’s not that your individual control over your copy ceased, but now others have gained the same control as well, independent of yours, so yours isn’t effectively enforceable beyond your own computing. This also applies down to the software, the operating system and the hardware. If you’re not in control, somebody else is. It is very dualistic/binary, because you’re a single individual/entity that’s not identical with all the other individuals/entities.
Alignment/collaboration: the libre-free, so-called “Open Source” world, or projects like Wikipedia etc., do it like this: they align on a single, specific cause, so they can be completely opposed on all other points and still get something accomplished, as they agree that this single point is important and beneficial to all participants and raises the level of everybody. They agree that they don’t need to compete or reinvent the wheel each for themselves, likely because that’s what they would otherwise have to do anyway, and they see no reason to invest alone when they could benefit from a shared effort. For this to work, there is the precondition of having a (human, social) policy in place about what (legal) status the result will have; otherwise there’s no foundation/contract/agreement to establish some safety/protection/trust against the risk of losing the investment/result of the effort, of having it destroyed some time later. Very little is needed for large-scale, diverse collaboration while retaining individual freedom beyond this industrial-style agreement that counters the industrial-age default mode of copyright law, as the other aspects are then subject to merit and emergence (how can something organically emerge if a single entity is in proprietary control of the outcome?).
Reflecting on my own alignment: the insistence on 100% libre-free licensing shuts out 95% of potential collaborators, precisely because they either don’t know or care about the risks, have some other values/agendas that intend to leave them in control in the end, or are tricked into the impression that it’s not too bad for them personally, which indeed is true, but there’s always somebody who will be the unnecessary victim of this view. If the project then is about text and hypertext in 2018, that shuts out the remaining 5%, and if some percentage shares should still be left, the position that the web model and server dependencies are a problem will do the rest, although there’s improvement in that area as it becomes more fashionable to be against Facebook and against the lack of usefulness of the web for the current AI hype. All of this doesn’t matter too much, as I know ways around it by just shifting the focus a little.
> Well, yes, but then you have to trust them and those who have created them, even if there are many people who were the authors.
If you can theoretically check for yourself what those systems do, you don’t need to trust the creators. If you practically can’t, you can either find or pay somebody you trust to do that for you (who has a good reason/incentive to be critical of the creators). Also, if you don’t like/trust what a system component does, you should be able to replace it with one from somebody else, with your own replacement or with one from someone you trust. There can be many of the latter, as everybody is allowed to create their own, share them and just plug them into the standardized infrastructure. Not perfect, but one of the best approaches I know of.
> I might have not understood fully what your vision is…
It’s not super-important to have it understood, but thank you very much for trying 😉 A few simple, practical questions should be sufficient to determine what kind of beast a practical result could be and what it likely wouldn’t be, because the sad truth about hard reality is that things won’t work according to their theoretical description if the theory is wrong. Sure, for a good, decent result, much more conversation would be beneficial, but I guess GCC participants are well aware of how difficult that is.
October 13, 2018 at 12:57 pm #338
Hi Harry,
here’s an attempt to answer #336: my parents divorced when I was at a young age (not in the good way, up to this day and into the future), and I told myself that it doesn’t affect me too much as they’re separate entities/individuals, but in recent times I’ve realized that it’s worth carefully and critically self-reflecting on its potential influence. It’s dangerous to conclude that my values come from that (I’m not convinced myself), but I mention it because you guys are interested in those backgrounds, I guess. Belinda Barnet claims in Memory Machines that Douglas Engelbart and Ted Nelson were driven by the fear of loss, and while I doubt that it’s the same for me, I really hate doing the same thing twice, wasting valuable lifetime for stupid reasons, doing things in a way that ends up being stuck, etc., which led to a love for printed books and the preservation of texts (as the main source of almost all we know), automation, libre-free licensing, etc. On another occasion, I wrote down a few interests I care about: libre licensing, the semantic net, open standards, avoiding dependencies, hypertext, the library of the future, curation, offline usage, posterity, augmented reality as decentralized public infrastructure. I think we discussed values somewhere (not too sure), but I believe that values/principles/motivation derived from previous experience explain best why an individual makes decisions the way he/she does, so I hope this description helps somewhat in place of a more detailed investigation. It feels like I omitted quite a lot, but I can’t tell exactly what; it might depend on the context.
Before this year, I worked on a small, primitive digitization/proofreading project for a German Bible translation that’s in the Public Domain (the last contributor died more than 70 years ago; you can imagine how useful their wording/language is for us today). I don’t know much about Dutch or English translations, but in German there are plenty of digital Bible texts, yet almost none of them are authentic to their printed original, because somebody at some time (usually when computers and the Internet became more popular with the general public, a time of enthusiasm to put the text out there in the absence of other offers) digitized them without establishing which printed original was used, with sloppy OCR and bad proofreading, sometimes adjusting the text to “modernize” it or to incorporate more recent developments or individual views, all of which is used by readers without awareness or suspicion. Almost all modern translations are not usable because of copyright restrictions. By chance I found out that another group had already completed exactly the same translation I had been working on for several years, independently of mine and of the other one I knew about and was collaborating with, without notifying us and doing theirs in secret, so I put my project on hold and instead looked more into hypertext.
So this year I tried to connect with anything that looks like a practical effort to build hypertext systems in our days (there’s plenty of academic study, but little ambition to implement), namely Ted Nelson’s “New Game in Town”, Doug@50, jrnl and the GCC, but over time it always turned out that some of the words are used because they’re in fashion, while what really goes on are deeper, hidden agendas informed by other values/priorities, some of which are: personal business interest, pressure to meet a certain deadline, avoiding investing time/money in solving the issues so that existing bad methods get re-used (the WWW, for instance), creative control, control over the creation, a focus on artificial intelligence/reasoning, a focus on documents, lack of time/interest/attention to discuss and study the problems, learn about them and join practical collaboration. It’s quite easy to pretend that those aren’t issues, but I haven’t seen anything that could actually get around the problems I have, that allows me to do the things I want to do, or that at least explains how I could stop worrying and learn to love the bomb. Without doubt, text isn’t something to care about any more in 2018; its golden age ended some 20-30 years ago, and it has been stagnating and even degenerating since then.
A lot of things can be seen as related or unrelated to Engelbart; they’re also related or unrelated to a lot of other people and things. I guess my main overlap with the GCC is via Engelbart, but that’s obviously only a very loose connection in terms of substance.