http://www.sciencedaily.com/releases/2011/04/110420125510.htm
Abstract designs scratched on mineral pigment show up in Africa about 75,000 years ago and are widely accepted by archaeologists as evidence for symbolism and language. "From this point onward there is a growing variety of new types of artifacts that indicates a thoroughly modern capacity for novelty and invention."
While crude stone tools crafted by human ancestors beginning about 2.5 million years ago likely were an indirect consequence of bipedalism -- which freed up the hands for new functions -- the first inklings of a developing super-brain likely began about 1.6 million years ago when early humans began crafting stone hand axes, thought by Hoffecker and others to be one of the first external representations of internal thought.
------
D - the evidence in language itself is indirect and tenuous.
Scholars estimate that PIE may have been spoken as a single language (before divergence began) around 3700 BC, though estimates by different authorities can vary by more than a millennium.
Proposed genetic connections
Many higher-level relationships between Proto-Indo-European and other language families have been proposed, but these hypothesized connections are highly controversial. A proposal often considered to be the most plausible of these is that of an Indo-Uralic family, encompassing PIE and Uralic. The evidence usually cited in favor of this consists in a number of striking morphological and lexical resemblances. Opponents attribute the lexical resemblances to borrowing from Indo-European into Uralic. Frederik Kortlandt, while advocating a connection, concedes that "the gap between Uralic and Indo-European is huge", while Lyle Campbell, an authority on Uralic, denies any relationship exists.
Other proposals, further back in time (and proportionately less accepted), link Indo-European and Uralic with Altaic and the other language families of northern Eurasia, namely Yukaghir, Korean, Japanese, Chukotko-Kamchatkan, Nivkh, Ainu, and Eskimo-Aleut, but excluding Yeniseian (the most comprehensive such proposal is Joseph Greenberg's Eurasiatic), or link Indo-European, Uralic, and Altaic to Afro-Asiatic and Dravidian (the traditional form of the Nostratic hypothesis), and ultimately to a single Proto-Human family.
A more rarely mentioned proposal associates Indo-European with the Northwest Caucasian languages in a family called Proto-Pontic.
Indo-Uralic is a hypothetical language family consisting of Indo-European and Uralic.
A genetic relationship between Indo-European and Uralic was first proposed by the Danish linguist Vilhelm Thomsen in 1869 (Pedersen 1931:336) but was received with little enthusiasm. Since then, the predominant opinion in the linguistic community has remained that the evidence for such a relationship is insufficient. However, a minority of linguists has always taken the contrary view (e.g. Henry Sweet, Holger Pedersen, Björn Collinder, Warren Cowgill and Jochem Schindler).
History of opposition to the Indo-Uralic hypothesis
The history of early opposition to the Indo-Uralic hypothesis does not appear to have been written. It is clear from the statements of supporters such as Sweet that they were facing considerable opposition and that the general climate of opinion was against them, except perhaps in Scandinavia.
Károly Rédei, editor of the standard etymological dictionary of the Uralic languages (1986a), rejected the idea of a genetic relationship between Uralic and Indo-European, arguing that the lexical items shared by Uralic and Indo-European were due to borrowing from Indo-European into Proto-Uralic (1986b).
Perhaps the best-known critique of recent times is that of Jorma Koivulehto, issued in a series of carefully formulated articles. Koivulehto’s central contention, agreeing with Rédei's views, is that all of the lexical items claimed to be Indo-Uralic can be explained as loans from Indo-European into Uralic (see below for examples).
D - looks like the jury might be out on this forever.
Monday, April 25, 2011
Wednesday, April 20, 2011
octomatics revisited. a 16-base hexadecimal one
D - I am sorry for the quality of this image. My Win7 is disabled, so I ended up taking a webcam shot.
http://www.infoverse.org/octomatics/octomatics.htm
description
the octomatics project is about a new number system
which has a lot of advantages over our old decimal system.
the name comes from the mixture of 'octal' and 'mathematics'.
what do you think: why do we have the decimal system
in our western world? because of our 10 fingers? why
do we have 7 days a week? why are 60 seconds 1 minute
and 60 minutes 1 hour? why do we have 24 hours a day?
and 31 or 30 days a month? do you think that's a really
good solution? well, here is another one:
...welcome to octomatics !
the new numbers
how many numbers are the optimum? 8? 10? 12? 16?
i think it's 8 or 12. make it 8 and you will be able to read
and work with binary code without any transformation.
i think the numbers should look more technical than
letters. maybe they could look like the following:
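D - a quick Python sketch (mine, not from the octomatics site) of why base 8 reads straight off binary: each octal digit is exactly one 3-bit group, so no real "transformation" is needed.

```python
def binary_to_octal(bits: str) -> str:
    """Convert a binary string to octal by reading it in 3-bit groups."""
    # Pad on the left so the length is a multiple of 3.
    bits = bits.zfill((len(bits) + 2) // 3 * 3)
    return "".join(str(int(bits[i:i + 3], 2)) for i in range(0, len(bits), 3))

print(binary_to_octal("101110"))  # 101 110 -> 5 6 -> "56"
```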
------
D - I agree with him - though chose 12 instead.
CVN will have English-style names for 11 and 12, instead of using the usual "10 and 1" / "10 and 2" convention. If I name stuff after the #s, then this makes for neatness and brevity.
See http://disnid6.livejournal.com/1429.html
-------
D - Anyway, Octomatics is based on 8. My proposed # system above is based on 16.
Why 16?
Well, really, I just wanted to reach 12 in some sensible fashion.
But now a little review of binary and hexadecimal.
http://en.wikipedia.org/wiki/Binary_numeral_system
The Indian scholar Pingala (circa 5th–2nd centuries BC) developed mathematical concepts for describing prosody, and in so doing presented the first known description of a binary numeral system.[1][2] He used binary numbers in the form of short and long syllables (the latter equal in length to two short syllables), making it similar to Morse code.
Representation
Any number can be represented by any sequence of bits (binary digits), which in turn may be represented by any mechanism capable of being in two mutually exclusive states.
Counting in binary is similar to counting in any other number system. Beginning with a single digit, counting proceeds through each symbol, in increasing order. Decimal counting uses the symbols 0 through 9, while binary only uses the symbols 0 and 1.
Since binary is a base-2 system, each digit represents an increasing power of 2, with the rightmost digit representing 2⁰, the next representing 2¹, then 2², and so on. To determine the decimal representation of a binary number simply take the sum of the products of the binary digits and the powers of 2 which they represent.
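D - the sum-of-products rule above, sketched as a few lines of Python (function name is mine):

```python
def binary_to_decimal(bits: str) -> int:
    """Sum each binary digit times its power of 2; rightmost digit is 2^0."""
    total = 0
    for position, digit in enumerate(reversed(bits)):
        total += int(digit) * 2 ** position
    return total

print(binary_to_decimal("1111"))  # 8 + 4 + 2 + 1 = 15
```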
D - fractions are a real bugger though...
Binary may be converted to and from hexadecimal somewhat more easily. This is because the radix of the hexadecimal system (16) is a power of the radix of the binary system (2). More specifically, 16 = 2⁴, so it takes four digits of binary to represent one digit of hexadecimal, as shown in the table to the right.
To convert a hexadecimal number into its binary equivalent, simply substitute the corresponding binary digits:
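D - the per-digit substitution can be sketched like this (function name is mine; the lookup is just Python's own base conversion):

```python
def hex_to_binary(hex_str: str) -> str:
    """Replace each hex digit with its 4-bit binary equivalent."""
    return "".join(format(int(d, 16), "04b") for d in hex_str)

print(hex_to_binary("2F"))  # 0010 1111 -> "00101111"
```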
Hexadecimal
In mathematics and computer science, hexadecimal (also base 16, or hex) is a positional numeral system with a radix, or base, of 16. It uses sixteen distinct symbols, most often the symbols 0–9 to represent values zero to nine, and A, B, C, D, E, F (or alternatively a–f) to represent values ten to fifteen. For example, the hexadecimal number 2AF3 is equal, in decimal, to (2 × 16³) + (10 × 16²) + (15 × 16¹) + (3 × 16⁰), or 10,995.
Each hexadecimal digit represents four binary digits (bits) (also called a "nibble"), and the primary use of hexadecimal notation is as a human-friendly representation of binary coded values in computing and digital electronics. For example, byte values can range from 0 to 255 (decimal) but may be more conveniently represented as two hexadecimal digits in the range 00 through FF. Hexadecimal is also commonly used to represent computer memory addresses.
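D - a small sanity check of the arithmetic above (variable name is mine):

```python
# Recompute the hexadecimal example 2AF3 by positional expansion,
# and show a byte rendered as two hex digits.
value = (2 * 16**3) + (10 * 16**2) + (15 * 16**1) + (3 * 16**0)
print(value)               # 10995
print(int("2AF3", 16))     # 10995 -- Python's built-in parse agrees
print(format(255, "02X"))  # "FF", the top of the one-byte range
```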
Binary conversion
Most computers manipulate binary data, but it is difficult for humans to work with the large number of digits for even a relatively small binary number. Although most humans are familiar with the base 10 system, it is much easier to map binary to hexadecimal than to decimal because each hexadecimal digit maps to a whole number of bits (four). This example converts 1111₂ to base ten. Since each position in a binary numeral can contain either a 1 or 0, its value may be easily determined by its position from the right:
With surprisingly little practice, mapping 1111₂ to F₁₆ in one step becomes easy: see table in Representing hexadecimal. The advantage of using hexadecimal rather than decimal increases rapidly with the size of the number. When the number becomes large, conversion to decimal is very tedious. However, when mapping to hexadecimal, it is trivial to regard the binary string as 4-digit groups and map each to a single hexadecimal digit.
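D - the nibble-grouping trick in the last sentence, as a sketch (again, names are mine):

```python
def binary_to_hex(bits: str) -> str:
    """Group bits into 4-bit nibbles (left-padded) and map each to one hex digit."""
    bits = bits.zfill((len(bits) + 3) // 4 * 4)
    return "".join(format(int(bits[i:i + 4], 2), "X") for i in range(0, len(bits), 4))

print(binary_to_hex("1111"))          # "F"
print(binary_to_hex("101011110011"))  # 1010 1111 0011 -> "AF3"
```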
--------
D - check out purplemath.com for all sorts of goodies related to alternative #-base systems.
http://www.purplemath.com/modules/numbbase.htm
--------
D - why use my 16-base # system?
1) it works with a 7-segment alphanumeric display - so any calculator.
2) the figures are the most minimal possible without unconnected stray "floating" segments.
3) it has all the benefits of Octomatics
4) it works very well with computers due to hexadecimal.
Hope you like it.
I'll try to make a nicer diagram at some point.
Saturday, April 16, 2011
africa shown to be birthplace of languages - study
http://www.sciencedaily.com/releases/2011/04/110415165500.htm
An analysis of languages from around the world suggests that, like our genes, human speech originated -- just once -- in sub-Saharan Africa. Atkinson studied the phonemes, or the perceptually distinct units of sound that differentiate words, used in 504 human languages today and found that the number of phonemes is highest in Africa and decreases with increasing distance from Africa.
The fewest phonemes are found in South America and on tropical islands in the Pacific Ocean. This pattern fits a "serial founder effect" model in which small populations on the edge of an expansion progressively lose diversity. Dr Atkinson notes that this pattern of phoneme usage around the world mirrors the pattern of human genetic diversity, which also declined as humans expanded their range from Africa to colonise other regions.
Friday, April 15, 2011
study questions language universals
http://www.bbc.co.uk/news/science-environment-13049700
The paper asserts instead that "cultural evolution is the primary factor that determines linguistic structure, with the current state of a linguistic system shaping and constraining future states".
"The [authors] suggest that the human mind has a tendency to generalise orderings across phrases of different types, which would not occur if the mind generated every phrase type with a unique and isolated rule.
Sunday, April 10, 2011
colour-coded pixels to express ascii for code
http://www.geekosystem.com/writing-code-ms-paint/
D: it has applications for any information.
I read a book called "Information Anxiety" about presenting data in a clear fashion.
These days, I use colour themes with highlighters while studying.
For example, a definition is one colour, biographies are another, and so on.
My DIY Magnetic Poetry for literacy students will include a colour theme for grammatical category. Not perfect, since sometimes English is unchanged between such categories.
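D - the idea of one colour per unit of information can be sketched as a toy encoding (the palette scheme below is my own invention, not the article's):

```python
def char_to_rgb(ch: str) -> tuple:
    """Encode an ASCII code point in the red channel; green and blue
    are derived decoration (toy scheme, not the article's)."""
    code = ord(ch)
    return (code, (code * 3) % 256, (code * 7) % 256)

def rgb_to_char(rgb: tuple) -> str:
    """Decode: the red channel carries the code point."""
    return chr(rgb[0])

def text_to_pixels(text: str) -> list:
    return [char_to_rgb(c) for c in text]

print(text_to_pixels("Hi"))  # [(72, 216, 248), (105, 59, 223)]
```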
Thursday, April 7, 2011
auxiliary verbs. may versus might
http://www.theglobeandmail.com/news/arts/russell-smith/do-you-know-the-difference-between-may-and-might/article1973684/
Russell Smith | The Globe and Mail
Russell Smith: On Culture
Do you know the difference between ‘may’ and ‘might’?
Globe and Mail Update
Published Wednesday, Apr. 06, 2011 4:44PM EDT
Last updated Wednesday, Apr. 06, 2011 4:53PM EDT
May is an auxiliary verb; its past is might. So you say “I may show up tonight” and your friend then reports “She said she might show up tonight.” Seems simple enough.
But it is nothing of the sort. The difference between may and might is one of the most frequent subjects of puzzlement in letters I receive from readers. Purists are annoyed that so many publications seem to use the two interchangeably, and others are confused about why it’s important. Expert sources point out a whole raft of subtleties here.
Reader Carol Bream sent me a citation from the Ottawa Citizen that encapsulates the problem with the haziness about these verbs. This sentence ran in that paper on March 8: “The new cutting-edge concrete may have made a difference in the deadly collapse of a highway overpass in Laval in 2006 which crushed two vehicles, killing five people and seriously injuring six others who were driving on top of the overpass at the time.” The sentence is ambiguous. A reader may think momentarily that the writer doesn’t know the actual outcome of the accident. The new concrete “may have made a difference” – in other words, there was new concrete there, and we don’t know if it played any role in the tragedy? No: We do know that there was only old concrete involved; what we are wondering is whether new concrete would have made a difference. So “might” is more appropriate in that sentence.
Here is another example to make this tricky point a little less murky: “I was so distracted I might have fallen in a puddle” means “I was at risk of falling in a puddle but didn’t.” “I may have fallen in a puddle” would mean “I don’t have a clear recollection of whether I did or not.”
Here’s another example of what not to do: “He was a highly talented minor-league player, so he may have gone on to the NHL.” This suggests we don’t know if he went into the NHL or not. Using “he might have gone on” would indicate he could have gone if he had wanted to.
In short, use might when you are speculating about what could have affected a situation in the past, use may when you are uncertain of what actually happened. (“He may have thought I was insulting him.”)
Note that these sentences are all about the past. When you’re talking in the present, the difference between the verbs is much less definite. In everyday speech, the two are often interchangeable:
-----
Me - did not know that.
-------
Me - what is with the last comma?
But Mr. Beck’s bombastic, and regularly offensive, commentary had become a drag on the Fox brand.
http://www.theglobeandmail.com/news/world/konrad-yakabuski/fox-drops-a-demagogue/article1974164/