Bravo

Learning a new language (Python)

Been a long, long time since I was professionally involved with computer games programming, but recently making modifications to the code of a 'mod' for a game (I just sussed out the syntax as I went along) has given me the 'bug' again.

Been researching the last few days on which language would suit me best, which is no easy task in itself.

C++, Visual Basic, DarkBASIC and C# all got a look in, but in the end it looks like Python might be the easiest to learn while still giving the most control.

I'm OK with program structures and logic.  I used to sit at a blank ZX Spectrum and type in a football manager game from scratch in the space of a few hours; the longest part was inputting the data for the team names (got from the Sunday paper) and player names (phone book lol).  The most challenging bit was sorting out the fixture list.  A 2D array was how I did it, say 20x20 (or however many teams per league), where e.g. (1,5)=15 means team 1 is at home to team 5 in week 15 of the season, and so on.  Obviously a simple system, and it would need more if I wanted to add things such as postponements etc.
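In Python, that fixture scheme is only a few lines.  A rough sketch of the idea, with the team count and the one sample fixture made up:

Code:
# Fixture list as a 2D array: fixtures[home][away] = week number,
# with 0 meaning "no fixture scheduled".
TEAMS = 20

fixtures = [[0] * TEAMS for _ in range(TEAMS)]

# e.g. team 1 at home to team 5 in week 15
fixtures[1][5] = 15

# Print the card for a given week
week = 15
for home in range(TEAMS):
    for away in range(TEAMS):
        if fixtures[home][away] == week:
            print("Week %d: team %d v team %d" % (week, home, away))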

Looking forward to some cracking AI challenges for the project I have in mind, which is a war strategy type game (yep, I know there are loads out there; this one's for me personally though).  Pathfinding and AI battle planning should be great fun!

Anyway, at very nearly 40 years old, let's see if the old dog can learn new tricks
myff admin

http://forum.downsizer.net/viewtopic.php?t=60368

A recent experience of C and C++
Bravo

Quote:
This sort of thing lurks in an awful lot of software, processors are so powerful few people think of doing things properly any more


So very true.  I've even seen tutorials where people say 'gah, don't worry about memory these days, there's plenty of it'.

When we were coding in Z80 assembly, we had to constantly condense the code.  Making games for a 48K machine is a whole other world compared to what's there today.

I remember a fully functioning (with graphics, of a sort) chess game on an unexpanded ZX81.  And for those that don't know, the ZX81 was a 1K machine.  Yes, 1024 bytes only, and they got a chess game on it with graphics.  It would be a great exercise for a modern-day programmer to do that.
myff admin

Indeed.

Except, to be a fair test, it would have to be 16KB or maybe 8KB (I'm not up on current assemblers), as memory addresses and basic opcodes are now much longer!
Bravo

16KB is acceptable, as that was an expanded ZX81 with the RAM pack that you stuck into the back, which kept falling out if you didn't stick it on with Blu-Tack.

Oh those were the days lol
Bravo

However, the graphics on a modern machine have to print to a monitor capable of millions of colours, and if you printed to the screen you would be forced to use that memory, so an allotment of 3x3 bytes per piece should be knocked off for the six piece types: pawn, rook, bishop, knight, king, queen.

9 bytes x 6 pieces = 54 bytes

Presumably they inverted the imagery within the code, or maybe even managed to reduce that code by a quarter, as the 8x8-bit blocks that made up the images used only eight patterns: all black, all white, top left, bottom right, top right, bottom left, diagonal right, diagonal left.  A total of 8 possibilities per 8x8-bit block.  So perhaps they assigned values that way, in which case you could hold the values for two whole blocks in one byte by using the two lots of 8 available.  This reduces the 54 bytes needed by half, or they may even have developed a mathematical algorithm to achieve it using even less code.
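To make that concrete: a 3-bit code 0-7 picks one of the eight patterns, and two codes fit in one byte, one in each half.  A quick Python sketch, with the code numbering being my own invention:

Code:
# Each 8x8 block is one of only eight patterns, so a 3-bit code
# identifies it, and two codes pack into a single byte.
PATTERNS = ["all black", "all white", "top left", "bottom right",
            "top right", "bottom left", "diagonal right", "diagonal left"]

def pack(a, b):
    return (a << 4) | b          # one block code in each half-byte

def unpack(byte):
    return byte >> 4, byte & 0x0F

packed = pack(2, 5)              # "top left" and "bottom left"
a, b = unpack(packed)
print(PATTERNS[a], "/", PATTERNS[b])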

In short, to be a fair test it would probably be a case of either disregarding the code and data memory needed for graphics, or making it a text-only game.
Bravo

In case anyone else has a hankering to try it, the tutorials I am running with (currently up to the 21st as I write) are well thought out and interesting.  I read during my research that this guy was good, and he also does tutorials for other languages.

The Python tutorials are here: http://www.thenewboston.com/?cat=40&pOpen=tutorial
judy

Ha ha, at nearly 40 you're just a youngster Bravo!! When I first started as a programmer there were no monitors at all; mainframe computers had a teletype as the console, input was on paper tape or punched cards, and the magnetic tape drives were taller than me.
myff admin

Sounds like you go a little further back than me, though yes I have punched a card  
judy

Ah, so you'll have bodged-in a chad then
Bravo

judy wrote:
Ha ha, at nearly 40 you're just a youngster Bravo!!


I'm having that  

And no, nobody else need add to that, it's fine as it is  

We'll be having Alan Turing commenting next
Frankonline

The first computer programme I wrote was in 1966, on a mainframe the size of a house. It had a memory of 16K. You had to write it out in Algol on a sheet of paper, hand it in to the computer department, and eventually one of the girls would type it onto punched paper tape to make it usable. About a week after I'd written it they would have run it on the computer, and I'd get a sheet of paper back with a load of error messages on it. Happy days.
judy

You're a few years ahead of me (not many!) but I remember those days too. I had one job where I never even saw the computer in all the time I was there (which wasn't all that long, because I hated it). Never did Algol: I started off with PLAN, moved on to COBOL, dabbled in FORTRAN, BASIC and RPG2, then went back to an assembler language on visible record computers, and eventually back to COBOL again. All the COBOL work just stopped after Y2K, so I managed to escape then....
Frankonline

After 1970 I had a break from computing for 13 years, until I bought a BBC Micro. I learned to program that, and went on to my first PC in 1992. I learned VBA as I'm a spreadsheet freak, and did Access VBA too. I also had (still have) a Psion 3 organiser and have written loads of programs in OPL for it. In 1996, when I retired, I went back to college to get an HNC in computing and did COBOL, C, C++ and assembly. Finished off with a web design course back in 1999. This template editing is the first new stuff I've done since then, and I'm having fun with it at the moment. But Python?  I'm too busy at the moment LOL!
Bravo

Haha, a blast from the past:



Done a bit of research, and the chess game didn't have all the rules (castling, queening and en passant capture are missing), but still a good effort, it has to be said.

I did find an article from 1983 that someone had put online:

http://users.ox.ac.uk/~uzdm0006/scans/1kchess/
judy

Frankonline, you might enjoy this little poem from an ICL poetry competition, c.1975:

The Rime of The Ancient Programmer: http://www.scribd.com/full/500869...cess_key=key-25guqaseog2d2or331c1
Frankonline

Yes, that was worth reading. I still have my cardboard box with a roll of punched tape in it!
myff admin

In amongst a million other things I'm trying to deal with (which does not help the concentration), I'm looking at the next phase of the process I previously reduced from 4 hours to 6 seconds.


I have given up trying to understand it, as it is in C++ with next to no useful comments. Laughably, as I read it (and I'm 90% certain I'm wrong somewhere), it is probably meant to take a set of input files and merge them, based on a simple criterion, into an output file. Except it is in fact called sequentially on single input files, and hence will just take hours on each file before outputting exactly what it read in, in the first place.

At this point in time, nothing would totally amaze me.
Bravo

What is the actual task of the processing? i.e. broken down into the main tasks in English (as if you were writing the main comments before putting the code in).

Reason I ask is that a lot of what I have been studying is sorting and processing theory, and how to apply algorithms to searching through lists.  It was new to me (it may of course not be new to you), but the processes involved could save a heck of a lot of time, depending on your needs.
Bravo

For anyone interested in those search algorithms, processes and efficiency, the MIT lecture about it starts here, and there are a further couple of videos after it (they will play automatically).

It starts at 19:02, when the professors switch, or the link that takes you to the right position automatically is: http://www.youtube.com/watch?v=tu...feature=player_detailpage#t=1142s




Might be teaching most of you to suck eggs, but it should be interesting for a lot of people.  It starts off very basic, but that is just a prelude to more complicated theory.

[edit: The vid doesn't automatically play the next on the playlist in this form, so the follow on video is: http://www.youtube.com/watch?v=ewd7Lf2dr5Q&p=4C4720A6F225E074 ]
myff admin

Bravo wrote:
What is the actual task of the processing? i.e. broken down into the main tasks in English (as if you were writing the main comments before putting the code in).

Reason I ask is that a lot of what I have been studying is sorting and processing theory, and how to apply algorithms to searching through lists.  It was new to me (it may of course not be new to you), but the processes involved could save a heck of a lot of time, depending on your needs.


Well the main task is elusive.

What I would say the process should be is basically to do what I have done already, but rather than merging a single stream of individual items, merging a set of files that are basically the same data, but now with counts where that merging has already been done on the dataset.

i.e. just more of the same, slightly different.

Trouble is, the code and the way it is called show no sign of doing this.
myff admin

I finally understand....

The process is meant to merge data, and the data once merged will cover today's date and yesterday's date.

So we are inevitably going to have to merge new incoming data with the old data.

But the method used does four loads of merging with the old data, as it runs four times to deal with each set of incoming data separately, which is bad enough. But (and this is the bit I was missing) the merge with the old data is buried deep in the C++ and will only deal with the data for one date at a time. So we have logic, if you can call it that, which says:

is this line's date field the same as the date I was on? If not, then save out everything for the date I'm on, and read in all the data for the date we now want.
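For anyone following along, that logic comes out roughly like the Python sketch below; the sample lines and the loading/saving stubs are stand-ins for what is actually buried in the C++.

Code:
def load_all(date):
    # stand-in: in the real code this re-reads a whole day's data
    print("loading everything for", date)
    return {}

def save_all(date, records):
    # stand-in: in the real code this writes a whole day's data out
    print("saving everything for", date)

# Incoming lines, with the dates NOT in order
lines = ["2011-06-01,a,1", "2011-06-02,b,1", "2011-06-01,c,1"]

current_date = None
records = {}
for line in lines:
    date, key, value = line.split(",")
    if date != current_date:                 # date changed...
        if current_date is not None:
            save_all(current_date, records)  # ...flush the old date out
        records = load_all(date)             # ...and reload the new one
        current_date = date
    records[key] = value

save_all(current_date, records)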

Can you imagine just how badly that behaves if the dates are not in sequential order? (And they are not.)

It's tragically bad.
Bravo

OK, just as an exercise in logic, seeing as I am now in training (I fully understand I'll get it wrong, but hopefully I'll learn something in the process).

As I understand it, this is the problem:

We have four fields: yesterday's number ID + data, and today's number ID + data.

These are not ordered lists.

So the problem is to decide whether it is quicker to read them in a linear way (i.e. as-is) or to sort them first.

To sort first would work out at n² for each list in the worst case (n being the length of the list),
e.g. read through the list: if the ID is lowest it stays in the placeholder variable, if not it switches, with the count boolean tripping (reset on each pass).
If the count boolean doesn't trip, then the list is already ordered, which more than likely happens before the full n² is reached, but you'd have to plan for n².

Then do the same again for list3 = list1 + list2, or however the data is to be stored.  So, all in all, it would work out at (n² * 2) + n to sort the lists first.
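That is the classic bubble sort with an early-exit flag; a quick Python sketch, where 'swapped' plays the part of the count boolean:

Code:
def bubble_sort(items):
    # Worst case around n^2 comparisons, but the flag lets an
    # already-ordered list escape after a single pass.
    n = len(items)
    for _ in range(n):
        swapped = False                    # the "count boolean", reset each pass
        for i in range(n - 1):
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
        if not swapped:                    # no switches: list already ordered
            break
    return items

print(bubble_sort([15, 3, 9, 1]))          # [1, 3, 9, 15]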

To read two unordered lists, I guess you'd have to assume the value you are looking for would average out exactly at the centre of the list, and of course just an increment for the list you are comparing it with.

So that would be (n * (n/2)) + n

So the unsorted method would work out at about a quarter or so quicker, but this assumes that I have understood the problem, which is highly doubtful anyway.  Still, I did enjoy trying to work it out (and I reckon my workings will probably be wrong too).
myff admin

Good analysis, but in this instance the answer is really a lot easier.

We just read in all the data for all the dates and don't care about order, and then we chuck the new merged data out again, diverting each bit into the right file for its date.
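In Python terms (the real thing is C++, and the file names and field layout here are my own invention), the plan comes out roughly as:

Code:
from collections import defaultdict
import glob

# Read everything from all the incoming files, ignoring date order,
# accumulating counts per (date, key) in memory.
merged = defaultdict(lambda: defaultdict(int))

for name in glob.glob("incoming-*.dat"):     # the incoming sets
    with open(name) as f:
        for line in f:
            date, key, count = line.rstrip("\n").split(",")
            merged[date][key] += int(count)

# Then divert each bucket into the right file for its date.
for date, bucket in merged.items():
    with open("merged-%s.dat" % date, "w") as out:
        for key, count in bucket.items():
            out.write("%s,%s,%d\n" % (date, key, count))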

It will be quite trivial, and quite probably less than 40 lines of code different from the code for the process that feeds into this one. I always like code that closely copies other code. I will be scrapping the current code completely, as I did the code for that previous process.

The only funky thing about the whole exercise is that, for speed, it is vital that it pre-allocates itself enough memory; and since that cannot be guaranteed, it must recognise when it is running out of memory and be able to abort and try again with a better guess at its own memory needs. It will then learn from its mistake on one run and always allocate a bit more next time.
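Python normally manages memory for you, so this is more the shape of the idea than how you would really do it there, but the abort-retry-remember loop sketches out like this (the guess file, sizes and the run_merge stub are all invented):

Code:
GUESS_FILE = "alloc_guess.txt"
NEEDED = 2500000            # pretend size of the real workload

def run_merge(pool):
    # Stand-in for the real work: fail if the pre-allocated pool
    # turns out to be too small for the data.
    if len(pool) < NEEDED:
        raise MemoryError("pool too small")
    print("merged OK with a pool of", len(pool))

# Start from last run's guess, or a default on the first run.
try:
    guess = int(open(GUESS_FILE).read())
except (OSError, ValueError):
    guess = 1000000

while True:
    try:
        run_merge([None] * guess)       # pre-allocate in one go
        break
    except MemoryError:
        guess = guess * 3 // 2          # learn: allocate a bit more next time
        with open(GUESS_FILE, "w") as f:
            f.write(str(guess))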
myff admin

Well okay it was more like 200 lines of changes.

But in 1200 lines of code, that does at least retain a very strong core of identity.

It is something to remember with code: if you use the same techniques again and again, then anyone maintaining the code is at an advantage.
Bravo

This is quite fun and educational for anyone else learning Python; you do have to learn stuff to solve the puzzles.  The first one was very easy; the second took me a while to work out what it needed, and I learned quite a bit trying to make a little translator program.  In the end a built-in function would have been much easier, but I didn't know that.

http://www.pythonchallenge.com/
Bravo

Oh the memories:



Great little article, and it brings back memories of hordes of us schoolchildren queuing up in the shops to 'have a go' on these machines.
myff admin

One thing I recall about that era was going into a "real" computer shop which had just put the abysmal VIC-20 in the window and getting somewhat blanked... a lot of people back then had simply not woken up to the way the wind was blowing, and that computers would be coming out of the office and into the homes of teenage nerds.
Bravo

myff admin wrote:
teenage nerds


oi! I was only 12  

Gah, those VIC-20s were extravagant: 3K of memory on those, nobody needs that much
Bravo

Actually, going back to the article about lazy coding (Clive Sinclair mentions it), it does seem true that people are very wasteful these days.

For instance, I've been looking up how to read data from a file and put it into a list.  On one forum someone was asking for a very similar method, and was told to use a certain module.  I looked up the module and, by crikey, it imported more modules, more functions, more code; dozens of pages of it.  Now, I don't know how Python code ultimately assembles; maybe it is clever enough to ignore functions that aren't actually used, I don't know (and I won't be going that far into it at this stage anyway), so I might be wrong.  It just seems that a few extra lines of code, and a new custom function or two, would do the job relatively easily.  Or at least, it seems easier than having to research what someone else has coded, to find out which specific function of the module you need, what parameters to use with it, and how it works.

It's like a function to put the kettle on.  The path to the kettle is going to be different in everybody's house, so the instruction would have to account for all kinds of circumstances.  Then you have electric kettles or kettles straight on the hob, electric hob or gas hob, and the instructions for those.  And so on... whereas if you just write it for your own program, to do specifically what you need, it's going to be a lot easier and quicker.  I may be wrong, but at my current level of understanding this seems to be the case.
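For what it's worth, the plain no-module version of reading a file into a list really is only a couple of lines of Python (the file name here is made up, of course):

Code:
# One stripped line per list entry, using nothing but built-ins.
with open("teams.txt") as f:
    teams = [line.strip() for line in f]

print(teams[:5])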
myff admin

Plainly a lot of languages don't cope with stripping out what they don't really need to use.

My example reduces the executable size from, in rough figures, 500KB to 20KB.

But even that is in some ways a shocking amount when you consider the "Chess" analogy.

So where does the 20KB go? Well, at a guess: I'm using "shared memory" for reporting functions that are not strictly needed, I have "verbose" modes for debugging that end up in the final version, and I use a fancy output formatter, "printf", because it makes the code easier to write. I think that's the list of "crimes against byte space", and it is not extensive.

But were I to actually work on it, that 20KB might be 5KB or even less.

Of course I am not going to do so; the need for speed was real, but the need to lose the 500KB is not actually important these days, let alone the need to reduce from 20KB.
Bravo

Indeed, that's right: the size of the executable code isn't really the problem, it's more whether it's going through redundant processes, e.g. running through the data 100 times where once would suffice, to handle cases that won't crop up in the way it's actually being used on this occasion.

And even then, 'speed only becomes a problem if it becomes a problem', i.e. if it takes 30 seconds to process where we could have got it down to 28.9 seconds, it clearly isn't worth it; but if it takes 7 hours when it could be done in 3, then it's worth a look (and even then, only if the extra four hours is actually a problem).
