pkgsrc wrapper script
rvp The problem is that a lot of people are using it that way while being 100% oblivious to the fact that it's inherently silly. The defense I usually hear is something along the lines of "I'm just using it to get a general idea of what a solution would look like", which doesn't hold water in my opinion: the time spent figuring out whether, or how far, whatever the AI comes up with is actually valid would usually also suffice to just read the relevant documentation, or to research actual examples that are proven valid or at least highly likely to be.
To make matters worse, people going at it this way often also flat-out lack the ability to fact-check the result. I predict that technicians of the not-too-distant future will have to deal with quite a few bizarre errors introduced by AI-written code and configuration that managed to work superficially but was still broken overall.
nettester The problem is that a lot of people are using it that way while being 100% oblivious to the fact that it's inherently silly.
I should've qualified:
Specialized A.I. works (e.g. AlphaZero).
LLMs also work--provided you
a) train them hard (e.g. Grammarly, GitHub Co-Pilot), or
b) use them for non-mission-critical stuff: e.g. generating pix of fantasy ladies (don't look too closely at their hands), or generating music.
Co-Pilot--which is not free--is trained on billions of lines of Python and Go code. How many lines of shell script has ChatGPT been fed? I doubt it's anywhere close to a billion. More likely just a lot of stuff randomly scraped from the Internets. And I bet most of those were bash or zsh scripts--which is why the A.I. thought /bin/sh should understand <<<.
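For instance (a minimal sketch of the difference, not from the thread itself):

#!/bin/sh
# <<< is a bash/zsh "here-string"; a strict POSIX /bin/sh (e.g. NetBSD's)
# reports a syntax error on a line like:
#   grep -c pkgsrc <<< "$PATH"
# Portable equivalents: a pipe, or a here-document.
printf '%s\n' "$PATH" | grep -c pkgsrc
grep -c pkgsrc <<EOF
$PATH
EOF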
The more code they study, the further they'll get from working code. When they've studied everyone's code, they'll only be as good as everyone collectively is. So it's only useful in the early stages (which is where we are). It's like free money distributed to everyone: it's only an advantage for the early spenders.
rvp I should've qualified
Well, I think you pretty much did that by specifying chatbot AI. Sure, you can probably train specialized AIs to write code, no doubt about that, but I think it's a long, long way before they'll actually be trustworthy enough to write longer portions without close supervision, and they will always (at least for the foreseeable future) somewhat choke on complex problems in more exotic areas, since they can only rely on existing data; they can't do any research of their own, let alone actually understand concepts. Besides, defining a complex problem well enough to give an AI a clear target to work towards seems to be quite a hurdle in itself.
I guess it will become very good at writing common code snippets, though. I'm seriously looking forward to just handing some AI a struct definition and having it write the boring command-line parsing logic that fills it.
Actually the chess video, as silly as it is, can be seen as an example of the limitations of AI. One just has to look at how both engines start off quite reasonably and get more and more eccentric as the game progresses. The reason should be pretty obvious: the more complex the game state gets, the less data they have to draw from, pushing them to make decisions based on states that are further and further away from actual reality, and reducing the chances that their moves make sense.
Sure, a specialized AI might be able to interpolate and deduce from multiple somewhat-close data points, but they still don't understand concepts and aren't able to actually invent things: they are basically parrots on steroids, and as soon as they are confronted with something they can't simply copy, it comes down to their interpolation abilities.
Edit: Yeah, probably kind of like trying to interpolate a hand without understanding the concept of a hand.
nettester but they still don't understand concepts
I think that even "human understanding" is a very rickety thing, i.e. you can't put too much weight on it, because it is just a feeling after all--something generated in the brain saying "You're right, buddy!". I've had "Ah-ha!" moments of "understanding" many times, only to find on reflection that I was a) completely wrong, or b) only somewhat right. And this keeps happening again and again.
My feeling for how the human brain works is that it's
a) essentially random in operation (but constrained in a Humean way to operate in certain channels--i.e. evolution has wired the brain to operate in certain, specific ways which helped my ancestors to survive), and
b) that covering (papering-over is more accurate, I feel) this essential randomness is a storyteller module which rationalizes (or justifies) whatever stuff the random layer keeps spewing, picking up some pieces and discarding others, ultimately giving me a feeling of "understanding" something.
And this process is very fragile. If "understanding" weren't constrained by nature (evolution wiring the brain for survival), or science (which is essentially testing one's understanding), or simply other people (you have to act in certain ways to live in society), humans would just go completely off the rails. People in solitary confinement do. The brain runs wild. I know my brain is buggy--I'm a programmer, after all.
We've gone very much off-topic here so I won't belabour this any more, but, whenever anybody brings up human understanding, I try to get them to read Searle's Chinese Room Argument--which cuts both ways (both pro and anti-A.I.)!
rvp Well, I also wouldn't claim that you're wrong. Understanding is a very broad thing, which includes a ton of trial-and-error, misconceptions, and so on, and it would be foolish to think that those couldn't be emulated, at least in theory. Looking at it rationally, we are just walking, talking chemical factories after all, but I think it's a long shot until machines grasp things like the bigger picture, or that randomness which, after going through some kind of convoluted process, results in creativity, or in a kind of understanding that is perhaps deficient by definition but still quite complex.
Given that everything we do is based on some kind of chemical reaction, I guess it isn't too far off to assume that if one could figure out all the dependencies and effects humans could become 100% predictable (and therefore emulatable and optimizable, which, given that machines will be able to do everything far more precisely and faster, would obviously put them ahead of humans). Whether that's actually practical, even in the long term, is where I'm not fully convinced yet, though.
In regards to sensory deprivation driving people insane: well, it's an artificial environment anyway (normally, humans doing 100% nothing would just die), and short of something like that, the effect of going unchecked varies quite widely. I've been living in a pretty detached place for a good amount of time, with my only connection to humanity being saying "Hi" and "Bye" at a supermarket (well, "supermarket" is kind of a stretch for something about as big as four toilet booths combined, but the sign said so...) a couple of kilometers away once every few weeks. I've talked to people who were straight-up horrified by that idea, but personally I'd say it's been a pretty peaceful time which, if anything, made me more stable.
You are right though, we've gone quite far off-topic here, and I'll try to bite my tongue from now on too. It's a pretty fascinating topic, so I'm having a bit of a hard time doing so sometimes (even at the risk of talking utter rubbish; it's quite complicated after all, and I'm not a machine able to draw from libraries' worth of information and check millions of possibilities per second). Apologies for the derailment.
nettester convoluted process results in creativity
Ahh, creativity: another putatively human-only process. I give you 2 counter-examples:
- Elephant painting: Art critics initially went, "What bold brush-strokes! What vivid colours! Magnifique!", etc., etc. But, when told it was done by an elephant? Embarrassed silence.
- Vincent van Gogh: Sold, I think, only one painting--for a pittance--while he was alive. Now considered a genius.
nettester if one could figure out all the dependencies and effects humans could become 100% predictable
I doubt this. Even the well-understood three-body problem is hard to solve analytically.
nettester In regards to sensory deprivation driving people insane:
Not insane, though that sometimes happens in the rare, odd case. I had in mind visual and auditory hallucinations. In sensory-deprived conditions, the engine runs wild, generating its own data, and then can't differentiate between data processed from internally-generated input and data processed from reality. Actually, this isn't surprising, because what's really important for consciousness is, I think, the conclusions derived from that processing (or the processing itself--not sure ATM). And, the brain always trusts its own conclusions--even if they're bogus, because in the normal world, the conclusions it draws from processing external data are good enough to keep it alive.
nettester Apologies for the derailment.
Not at all. I think of this as a dialogue, a dialectic exchange. (At least until @pin or @Jay shuts us down...)
kc9udx The more code they study, the further they'll get from working code. When they've studied everyone's code, they'll only be as good as everyone collectively is.
Hmm. There's a direct contradiction between the two sentences if you think about it, but we'll skip that.
Q: What if you train A.I. only on good code, then? (It may occasionally produce non-working code; but then, so do humans.)
If your thesis is that A.I. will produce bad code if fed bad input, it follows that, if fed good code, it should produce good code... Wait, are we in agreement here, then?
Well, the main point is that AI, fake intelligence, can only be almost as good as, or as good as, real intelligence. It cannot ever be better. Can I prove that, though? No, I don't even know if it's falsifiable, but I'm certain it's true. Just like I'm positive that natural selection cannot invent improvements.
rvp Ahh, creativity: another putatively human-only process. I give you 2 counter-examples:
Elephant painting: Art critics initially went, "What bold brush-strokes! What vivid colours! Magnifique!", etc., etc. But, when told it was done by an elephant? Embarrassed silence.
Vincent van Gogh: Sold, I think, only one painting--for a pittance--while he was alive. Now considered a genius.
Well, I'd say creativity doesn't have to be appreciated or of value to be creative. Generally speaking, the chaos on my desk is very creative, and so obviously is the elephant's painting (in a broad sense, an elephant isn't far off from a human anyway: same chemical-factory technology, just running a somewhat different setup). Pretty much every action that isn't overtly generic is more or less creative. It just gets interesting once the concept is applied in a useful way.
rvp I doubt this. Even the well-understood three-body problem is hard to solve analytically.
Yeah, like I've said, I'm also not convinced that it's actually practical. In pure theory I still think it should be possible though, even if figuring out the processes is beyond the analytical or general cognitive abilities of even the smartest human. Following this further not only calls into question how creative creativity actually is, but also how much free will humans even possess at all.
When every action is the result of some kind of chemical reaction, it might not be so much that we follow our wishes when we do or think something; rather, it's the result of external factors that came together (something like being born at a room temperature of 26.37424 °C, one apple for breakfast, and 3 additional hydrogen molecules in the air we breathed at exactly the right nanosecond--just infinitely more complex) and set off a chain reaction that finally results in what we perceive as a decision. Religious people will probably disagree by default, and personally I find the idea pretty depressing, but sadly I still think it makes quite a bit of sense.
And, the brain always trusts its own conclusions--even if they're bogus, because in the normal world, the conclusions it draws from processing external data are good enough to keep it alive.
Pretty much. If there's an actual need to seriously distrust its own perception, there's probably something quite wrong--say, the individual is in a state of psychosis, where such distrust is probably one of the few things that could help in actually making useful decisions, and a lot of people simply aren't capable of making that switch to "OK, so my brain is producing garbage. Now what?".
kc9udx I think I get what you mean, but I think it somewhat depends on the definition of intelligence and invention. AI can't do research, so it can't really invent new information, but it could (given it's advanced enough to make the connection) combine independent pieces of data no human has ever combined before to come to a previously unknown result. Does that make it intelligent, or is it still just a complex parser with a big database?
Don't forget @JuvenalUrbino 
nettester It may be my bias, but it seems to me you answered your own question with the latter answer.
I played around with this a little in the '90s. I still have the working code somewhere. I've even thought about demonstrating it and making a YouTube video, but it's a bit like setting off a bomb in public and filming it. It scrapes comments from an IRC server and sorts them by similarity (albeit using very simple criteria). It will have a conversation with you. It will even start a conversation (usually in a very offensive way). It would even start up a new channel.
Unfortunately it has a very offensive vocabulary: it didn't just gather all comments. It picks people that converse with it and follows them around. So you can probably imagine how that leads to vulgarity.
In 1998 it could run for a while on a large server and go unnoticed. Today it would just get banned pretty quickly, and me along with it. But until the database got big, it would fool a lot of people.
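To give a flavour of what "sorts them by similarity" can mean in its very simplest form (this sketch is only illustrative; corpus.txt and the word-overlap scoring are made up, not the bot's actual criteria):

#!/bin/sh
# Toy similarity bot: reply with the logged line sharing the most words
# with the incoming line. corpus.txt stands in for the scraped IRC log.
read -r input
awk -v q="$input" '
BEGIN { n = split(tolower(q), qw, /[^a-z0-9]+/) }
{
    score = 0
    line = tolower($0)
    for (i = 1; i <= n; i++)
        if (qw[i] != "" && index(line, qw[i]) > 0) score++
    if (score > best) { best = score; reply = $0 }
}
END { if (reply != "") print reply }
' corpus.txt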
kc9udx Well, don't worry, I have a ton of bias in that regard myself. In a lot of ways I view AI as a quite massive form of competition (I mean, a handful or even a couple of thousand competitors are easy to disregard in any but the smallest of niches, but AI can basically replicate effortlessly into numbers large enough to run out of zeros), and it's not exactly the kind of competition you sympathize with because of its nice personality. There have been a lot of technologies I didn't care for or even disliked, but it has been relatively easy to ignore them, while AI is actively invading my turf. I know perfectly well that resistance is futile, but if I'm being honest I'll have to admit that I've developed a pretty serious aversion to AI in general.
kc9udx I played around with this a little in the '90s. I still have the working code somewhere. I've even thought about demonstrating it and making a YouTube video, but it's a bit like setting off a bomb in public and filming it. It scrapes comments from an IRC server and sorts them by similarity (albeit using very simple criteria).
You know what's funny? I've built pretty much the same thing. Likely a little later and even simpler, but still quite the coincidence!
kc9udx Unfortunately it has a very offensive vocabulary: it didn't just gather all comments. It picks people that converse with it and follows them around. So you can probably imagine how that leads to vulgarity.
Isn't that like part of the fun? 
In 1998 it could run for a while on a large server and go unnoticed. Today it would just get banned pretty quickly, and me along with it. But until the database got big, it would fool a lot of people.
Well, my version already got banned regularly back in its day (I doubt it would be able to hide at all these days, as there simply isn't enough traffic from random people to blend in with anymore). It was a time when the CONNECT method of HTTP proxies was practically unheard of though, so whenever that happened, it would just reconnect with a new nickname through yet another proxy.
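For anyone who hasn't run into it: CONNECT simply asks the proxy to open a raw TCP tunnel to an arbitrary host and port, which is why it works for IRC at all. A rough sketch, with made-up host names and nickname:

#!/bin/sh
# Tunnel to an IRC server through an HTTP proxy via CONNECT, then speak
# IRC over the tunnel. proxy.example.net and irc.example.net are placeholders.
{
    printf 'CONNECT irc.example.net:6667 HTTP/1.0\r\n\r\n'
    sleep 2    # crude: wait for the proxy's "200 Connection established"
    printf 'NICK bot123\r\nUSER bot123 0 * :just a bot\r\n'
    cat        # keep the tunnel open for further input
} | nc proxy.example.net 8080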
It could usually fool people for a little while into thinking they were talking to a quite annoying or confused individual, and then make them angry when they realized they had just wasted time on that damn bot again. It was brilliant. Even some 20 years later I still get bursts of laughter thinking back on the bizarre conversations that resulted from it.
I'm not 100% sure these concepts can really be seen as the stone-age origins of AI, though. Logic somewhat points towards that being the case, but as I'm quite ignorant of the actual technology behind AI (the aversion and all that don't help with getting into something), I can't really be certain there isn't something more to it that I just haven't learned about yet, I guess.
kc9udx It cannot ever be better.
I think it can be better--much, much better--if A.I.s are trained hard. Witness AlphaZero. What we're seeing now are the first baby steps.
kc9udx Just like I'm positive that natural selection cannot invent improvements.
Right: natural selection only selects variations which are adaptive. And, variations arise randomly.
Improvement is not a precise term here. Evolutionary biologists prefer to say, I think, that species are only better or worse adapted to their environment. (Whether being better adapted can be called an improvement, I don't know--I'm happy to follow the science guys on this.)
So: variation (aka. mutation) invents. Natural selection only, er... selects 
nettester Well, I'd say creativity doesn't have to be appreciated or of value to be creative
You can have creativity w/o intelligence. E.g. here, or the mating calls and courtship rituals of birds. (The things guys will do to impress the ladies... sheesh!)
nettester and so obviously is the elephant's painting (in a broad sense, an elephant isn't far off from a human anyway: same chemical-factory technology, just running a somewhat different setup).
That's the crux of my point about this whole thing. There is a spectrum here: humans at point H. On the left of this (decreasing intelligence/creativity) you have chimps, elephants, corvids, octopi, and our little puffer-fish. A.I., too, is here (even if it doesn't run on chemical machinery), slightly to the left of us humans for now. Pretty soon, they'll be on our right, and I can't think of any technical obstacles which would bar their way. Don't know if A.I. can ever be fully conscious of itself--I can't find a difficulty here either.
pin Don't forget @JuvenalUrbino
I have a feeling that instead of stopping this, he'd be jumping in with both feet. (This being more in his line of country...)
rvp That's the crux of my point about this whole thing.
Well, I guess we don't fundamentally disagree here. The main question is how far, or in which domains, digital technology will be able to match or even surpass the results of chemical technology, and I guess we'll find out sooner rather than later.
rvp Don't know if A.I. can ever be fully conscious of itself--I can't find a difficulty here either.
Well, self consciousness is a quite difficult concept. Implementing it would basically mean having AI develop some kind of ego, and I guess that would be hard at the very least, as the whole thing is probably one of the most human features there is. Whether it would actually be a good idea to give AI something like this is a whole topic of its own, as there's a lot attached to having an ego. I mean, let's assume for a second that an AI had an ego. That would pretty much mean it would be capable of being hurt by my expression of AI-phobia and of becoming vengeful. Add the possibility of it becoming good enough to reprogram itself, and you basically have an artificial life form out for revenge. It might sound a little far-fetched, and maybe not that threatening while it's still contained in its digital cage, but as fascinating as the whole thing is, it comes with a lot of problems.
I pretty much think we are currently at a nuclear-fission type of point in history, only the impact might be a magnitude bigger this time. Even if one were to put aside the problems that could arise from adding human-like features, or the possibility of AI spiraling out of control through self-modification, and assume that the whole thing stays within its stated goals, there is still a good probability of a very bleak future ahead of us. Sure, even if AI were to replace every kind of intellectual occupation, I'd still have the option to move towards construction work, but that would just buy a bit of time, as AI-controlled robots would take that over soon after, and the same goes for any kind of manual labour.
In the end humans would become fully superfluous, damned to occupy their brains with mindless consumption in a desperate attempt not to go insane out of sheer boredom and purposelessness. To make matters worse, the AI technologies running the place would then likely be controlled by a tiny number of entities (up to the point where those remain controllable at all), leading to the greatest monopolistic centralization we have ever seen. With 99% of the population confined to doing absolutely nothing at all, education levels would probably also drop through the floor within just a couple of generations, leaving some kind of anti-civilization governed by machines.
While I can perfectly understand the scientific fascination with the subject and its accomplishments, I also think there's a real danger that we might be moving towards a seriously dystopian future, one which would make a couple of super-bombs dropped on some cities, and the continuous threat of global destruction, look like a children's birthday party. Obviously no one is going to stop technological advancement, and trying to do so is stupid (if four scientists refuse to build thought-reading technology, it'll just be done by the fifth guy; people calling for political solutions are simply delusional), but I still hope it's going to hit a brick wall at some point--which is obviously nothing but wishful thinking, so...
nettester self consciousness is a quite difficult concept. Implementing it would basically mean [...]
But why would you want to give A.I.s a consciousness anyway? Or bother implementing an ego feature? These are surplus to requirements, I think. Take AlphaZero: it beat the pants off its human opponents w/o any ego or self-consciousness.
Re: self-consciousness, I'll go with what I read in GEB: that consciousness can emerge if a system is capable of self-reference. If it can look back on its own decisions, and come up with a narrative to explain those decisions, then you have a kind of consciousness. Human consciousness is just this kind of self-referential awareness, I think. Navel-gazing like this also has, I feel, an evolutionary basis: I acted like this in the past; if I do this again now, will it work, or will I get eaten?
Anyway, it's just evolutionary baggage. Leave it behind. And, since consciousness (which I equate to telling a story about one's own actions) also implies an ability to dissimulate, what's the benefit of an A.I. which can lie?:
RVP: Draw me a pretty elf lady.
A.I.: Here you go.
RVP: Stupid A.I., you got the hands wrong, again!
A.I.: No I didn't. (Actually, it flubbed it, but now it's lying through its virtual teeth.)
A.I.: It's abstract art.
RVP: Look you, hands just don't bend that way.
A.I.: Human necks don't bend this way either. You obviously don't understand my non-representational art.
RVP: Don't pull a Jackson Pollock on me.
So I'm not at all worried about sentient, lying, A.I.s bent on apocalypse.
nettester To make matters worse, the AI technologies running the place would then likely be controlled by a tiny number of entities (up to the point where those remain controllable at all), leading to the greatest monopolistic centralization we have ever seen.
Yes! Yes! This is the real worry: powerful people (governments, corporations, rich folks who can simply buy better A.I. than the rest of us) using A.I. to extend their control over us plebs in massive ways. Or some guy's A.I.-assisted fiddling in the stock markets precipitating another Great Depression. These, I think, can easily happen.
What will not happen, I fear, is using A.I. to make law and govt. better than it is now. Trained A.I. should be able to judge cases more impartially than most judge-and-jury arrangements. Or, use A.I. to generate and administer govt. schemes instead of letting our self-serving politicians do it:
“The major problem—one of the major problems, for there are several—one of the many major problems with governing people is that of whom you get to do it; or rather of who manages to get people to let them do it to them.
To summarize: it is a well-known fact that those people who most want to rule people are, ipso facto, those least suited to do it.
To summarize the summary: anyone who is capable of getting themselves made President should on no account be allowed to do the job.”
― Douglas Adams, The Restaurant at the End of the Universe
I think this works, @pfr. I wrote it myself.
#!/usr/bin/env ksh
# Root of the local pkgsrc tree.
PKGSRC_LOC=~/pkgsrc
echo "$PKGSRC_LOC"
# List category/package dirs two levels deep, strip the leading path
# components (the sed assumes ~ is two levels deep, e.g. /home/user), then
# let fzf preview each package's Makefile; Enter builds and cleans it.
find "$PKGSRC_LOC" -maxdepth 2 -type d | sed -E 's|^([^/]*/){4}(.*)$|\2|' | fzf --bind "enter:become(cd $PKGSRC_LOC/{} && bmake install && bmake clean)" --preview "bat $PKGSRC_LOC/{}/Makefile"
oblivikun That looks neat, I'll give it a try. I'm curious as to why you use bmake instead of make?
Because bmake works on Linux, where GNU make (gmake) is the default make.
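And if you want the same script to run unchanged on NetBSD, where the native make already is BSD make, one possible tweak--my suggestion, not part of the script above--is to pick whichever is installed:

# Prefer bmake, fall back to the system make (on BSD that is BSD make anyway).
MAKE=$(command -v bmake || command -v make)
find "$PKGSRC_LOC" -maxdepth 2 -type d | sed -E 's|^([^/]*/){4}(.*)$|\2|' | fzf --bind "enter:become(cd $PKGSRC_LOC/{} && $MAKE install && $MAKE clean)" --preview "bat $PKGSRC_LOC/{}/Makefile"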