Tidbits @ Kassemi

A collection of opinions, thoughts, tricks and misc. information.

Sunday, October 29, 2006

 

Sample registration form

Heh. The framework's getting MUCH easier to use... and it's still not quite where I want it. Here's a quick little sample: a user registration process. This is the ENTIRE thing. Form validation, form generation, database entry,
etc. We've even got a great captcha generator that handles captchas fairly transparently.

Tomorrow after work the focus is on finishing up unit tests... After that I'll be separating the server from the framework, and then after a few touchups and additions to the library it's ready for the first alpha release (which might be a while).


import kepty
import datetime
from kepty.lib.forms import *
from kepty.lib.scaffolding import validate_input

# Create a form...
class RegisterForm(FormSkeleton):
    username = CharWidget(fv=Validator(min_length=2))
    password = PassWidget(fv=Validator(field_match='password_confirm'))
    password_confirm = PassWidget()
    email = CharWidget(fv=Validator(is_email=True))
    birthdate = DateTimeWidget(fv=Validator(max_age=18))
    description = TextWidget(fv=Validator(min_length=2))
    captcha = CaptchaWidget(fv=Validator(valid_captcha=True))

# Creates a quick template file... We delete this command as soon as it's been run.
RegisterForm().quick_start('/home/james/Projects/kepty/base/templates/tester/registration_form.tmpl')

@validate_input(error='same')
@kepty.expose(template='registration_form.tmpl')
def register(request, errored=False, registration=RegisterForm):
    # No submission yet: render an empty form.
    if registration is None:
        return dict(RegisterForm=RegisterForm())

    # Validation failed: re-render with the submitted values.
    if errored:
        return dict(RegisterForm=registration)

    # Valid submission: store the new user.
    user = kepty.lib.database.ORM.create('users')
    user.update(registration.all())
    user.sync()

    return "Thanks! You've been registered, %s!" % user.username


application = kepty.make_app(
    (r'^/register$', register)
)

kepty.mount('/tester', application, 'tester')
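The Validator keywords above are declarative; Kepty's internals aren't shown here, but a minimal plain-Python sketch of what checks like min_length, field_match and is_email boil down to might look like this (the validate function, its rule dicts and its messages are hypothetical illustrations, not Kepty's API):

```python
import re

def validate(data, rules):
    """Run per-field checks over submitted values; returns {field: error}.
    Rule names mirror the keyword arguments used in the form above."""
    errors = {}
    for field, checks in rules.items():
        value = data.get(field, "")
        if checks.get("min_length") and len(value) < checks["min_length"]:
            errors[field] = "too short"
        elif checks.get("field_match") and value != data.get(checks["field_match"]):
            errors[field] = "fields do not match"
        elif checks.get("is_email") and not re.match(r"[^@\s]+@[^@\s]+\.[^@\s]+$", value):
            errors[field] = "not a valid email address"
    return errors

rules = {"username": {"min_length": 2},
         "password": {"field_match": "password_confirm"},
         "email": {"is_email": True}}
form = {"username": "james", "password": "s3cret",
        "password_confirm": "s3cret", "email": "james@example.com"}
print(validate(form, rules))  # {} -- everything passes
```

A framework's job is mostly to hide that loop behind the widget declarations and re-render the form with the error dict when it comes back non-empty.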


Thursday, October 26, 2006

 

lighttpd, apache, boa, thttpd benchmarks - A quick server for Kepty

This past week I realized that I probably shouldn't be writing a small, quick and stable server from scratch for the Kepty Web Framework project. Instead, I should be out there looking for an existing server to use as a basis. Here's pretty much what I wanted:


  1. Fast and light

    When I'm writing software I like to write it once. Everything should work, and it should scale gracefully. This means the software should efficiently take the hardware as far as it can go, not the other way around. Sorry RoR, you're dismissed (currently).

  2. Stable

    I'll be using this framework for a number of production sites. It's got to be sturdy. There's nothing I hate more than restarting the server three times a day when it crashes suddenly.

  3. Simple

    Kepty is a framework written in python. C and C++ are monstrous for web programming, but python offers the perfect combination of speed and elegance for the job. The Python/C API isn't as clear as the language. Since I don't want to spend too much time with it, I'd like the server code to be simple and easily extended.

  4. Load-worthy

    Along with point #1, I like what I write to scale well. The server should handle load. The combined effects of a Slashdotting and Digg should barely register :)



In order to get this I started to look at a few servers. But first of all, the system:

System:




and the test description:

Test #1: 100,000 requests, 100 concurrent, screenshot.jpg <- 131KB
Test #2: 100,000 requests, 20 concurrent, starter.png <- 1.5KB

I wanted one test the server probably wouldn't get put through in practice (bandwidth would get eaten up first), and then a test for a more real-world scenario. We're looking at these servers to replace the current one, so here are its benchmarks:

Kepty Web Framework (current pure-python server)



I'll be replacing this server (which you can take a look at on the sourceforge project page) with a faster one... This is why we're doing the benchmarks...


Server Software: Kepty-1.0
Server Hostname: 127.0.0.1
Server Port: 2620

Document Path: /static/screenshot.jpg
Document Length: 133175 bytes

Concurrency Level: 100
Time taken for tests: 166.201361 seconds
Complete requests: 100000
Failed requests: 0
Write errors: 0
Total transferred: 13342500000 bytes
HTML transferred: 13317500000 bytes
Requests per second: 601.68 [#/sec] (mean)
Time per request: 166.201 [ms] (mean)
Time per request: 1.662 [ms] (mean, across all concurrent requests)
Transfer rate: 78397.59 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 92 696.0 0 45004
Processing: 1 39 1123.9 18 106328
Waiting: 0 39 1123.8 18 106327
Total: 5 132 1434.8 18 115331

Percentage of the requests served within a certain time (ms)
50% 18
66% 19
75% 20
80% 21
90% 23
95% 26
98% 3018
99% 3020
100% 115331 (longest request)


We bombed here... Concurrent users aren't handled fast enough, but we didn't do too badly :)
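For what it's worth, ab's headline numbers are simple arithmetic over the report's totals; recomputing them for the run above shows how they relate:

```python
# Figures taken from the ab report above
requests = 100_000
concurrency = 100
total_time = 166.201361            # "Time taken for tests", seconds
total_bytes = 13_342_500_000       # "Total transferred"

req_per_sec = requests / total_time
per_request_ms = total_time / requests * concurrency * 1000
across_all_ms = total_time / requests * 1000
transfer_kbps = total_bytes / total_time / 1024

print(round(req_per_sec, 2))       # 601.68  [#/sec]
print(round(per_request_ms, 3))    # 166.201 [ms] (mean)
print(round(across_all_ms, 3))     # 1.662   [ms] (across all concurrent requests)
print(round(transfer_kbps))        # ~78398  [Kbytes/sec]
```

So "Time per request (mean)" is just the wall-clock time scaled up by the concurrency level, which is why it looks 100x worse than the per-connection figure.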


Benchmarking 127.0.0.1 (be patient)
Completed 10000 requests
Completed 20000 requests
Completed 30000 requests
Completed 40000 requests
Completed 50000 requests
Completed 60000 requests
Completed 70000 requests
Completed 80000 requests
Completed 90000 requests
Finished 100000 requests


Server Software: Kepty-1.0
Server Hostname: 127.0.0.1
Server Port: 2620

Document Path: /static/starter.png
Document Length: 1478 bytes

Concurrency Level: 20
Time taken for tests: 74.292005 seconds
Complete requests: 100000
Failed requests: 0
Write errors: 0
Total transferred: 172500000 bytes
HTML transferred: 147800000 bytes
Requests per second: 1346.04 [#/sec] (mean)
Time per request: 14.858 [ms] (mean)
Time per request: 0.743 [ms] (mean, across all concurrent requests)
Transfer rate: 2267.50 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 4 186.6 0 21001
Processing: 0 9 31.8 8 3250
Waiting: 0 9 31.7 8 3250
Total: 0 13 190.7 8 21669

Percentage of the requests served within a certain time (ms)
50% 8
66% 9
75% 9
80% 9
90% 10
95% 10
98% 10
99% 13
100% 21669 (longest request)


The 21 second wait time on that longest request kind of gets to me... Can anyone out there explain what that actually means in the real world? (My best guess: when the listen backlog fills up, TCP retransmits the SYN after 3 s, then 6 s more, then 12 s more, so stalled connections complete at roughly 3, 9, 21 or 45 seconds, which matches the connect-time outliers in both runs.) In any case, Kepty still outperforms pretty much every other pure-python server out there... It's just not good enough for me.

Now we get on to the fun stuff:

Apache



The powerhouse of the web world. Apache is an EXTREMELY popular server, and it's got the support of pretty much every professional system administrator out there. It's highly extensible through its API, and it's just what I'm NOT looking for here (don't yell at me for the low scores, this is a specialized task, damnit!).

Speed/Lightness: 4/10 (2 tests avg 2594.07 Req/S)

Stability: 10/10

Simplicity: 3/10

Load-handling: 7/10



Server Software: Apache/1.3.37
Server Hostname: localhost
Server Port: 80

Document Path: /screenshot.jpg
Document Length: 133175 bytes

Concurrency Level: 100
Time taken for tests: 78.140494 seconds
Complete requests: 100000
Failed requests: 0
Write errors: 0
Total transferred: 13343133704 bytes
HTML transferred: 13318032700 bytes
Requests per second: 1279.75 [#/sec] (mean)
Time per request: 78.140 [ms] (mean)
Time per request: 0.781 [ms] (mean, across all concurrent requests)
Transfer rate: 166756.09 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1 1.2 1 16
Processing: 1 76 6.5 77 116
Waiting: 0 9 13.0 5 75
Total: 7 77 6.7 79 118

Percentage of the requests served within a certain time (ms)
50% 79
66% 81
75% 82
80% 83
90% 84
95% 85
98% 88
99% 94
100% 118 (longest request)



Benchmarking localhost (be patient)


Server Software: Apache/1.3.37
Server Hostname: localhost
Server Port: 80

Document Path: /starter.png
Document Length: 1478 bytes

Concurrency Level: 20
Time taken for tests: 25.588280 seconds
Complete requests: 100000
Failed requests: 0
Write errors: 0
Total transferred: 172401724 bytes
HTML transferred: 147801478 bytes
Requests per second: 3908.04 [#/sec] (mean)
Time per request: 5.118 [ms] (mean)
Time per request: 0.256 [ms] (mean, across all concurrent requests)
Transfer rate: 6579.61 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.7 0 13
Processing: 1 4 0.8 4 17
Waiting: 0 1 0.8 2 14
Total: 1 4 1.1 5 21
WARNING: The median and mean for the waiting time are not within a normal deviation
These results are probably not that reliable.

Percentage of the requests served within a certain time (ms)
50% 5
66% 5
75% 5
80% 5
90% 6
95% 6
98% 6
99% 6
100% 21 (longest request)



lighttpd



This server continues to gain popularity, largely because RoR users find it a good way to deploy over FastCGI. It's faster than Apache, simpler, and just as stable. It does aim, however, to maintain a full feature set, so it gets a lower score for simplicity.

Today was the first time I'd played around with lighttpd, and I have to say that I'm impressed. It maintained a higher memory footprint under load than the next two servers (no surprise), and that does score against it, but the speed with which it handled the requests was amazing. If I had more time, or maybe more interest in something with so many features, I'd be more than willing to use this program. Heck, perhaps I'll make a source contribution to them in the future.

Speed/Lightness: 9/10 (2 tests avg 6985.48 Req/S)

Stability: 9/10

Simplicity: 5/10

Load-handling: 8/10



Benchmarking localhost (be patient)


Server Software: lighttpd/1.4.13
Server Hostname: localhost
Server Port: 2623

Document Path: /screenshot.jpg
Document Length: 133175 bytes

Concurrency Level: 100
Time taken for tests: 25.332170 seconds
Complete requests: 100000
Failed requests: 0
Write errors: 0
Total transferred: 13343233961 bytes
HTML transferred: 13318432225 bytes
Requests per second: 3947.55 [#/sec] (mean)
Time per request: 25.332 [ms] (mean)
Time per request: 0.253 [ms] (mean, across all concurrent requests)
Transfer rate: 514385.50 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 1.2 0 31
Processing: 7 24 3.0 24 51
Waiting: 0 1 1.5 2 30
Total: 8 24 3.4 24 65

Percentage of the requests served within a certain time (ms)
50% 24
66% 24
75% 25
80% 25
90% 25
95% 27
98% 40
99% 44
100% 65 (longest request)



Server Software: lighttpd/1.4.13
Server Hostname: localhost
Server Port: 2623

Document Path: /starter.png
Document Length: 1478 bytes

Concurrency Level: 20
Time taken for tests: 9.976645 seconds
Complete requests: 100000
Failed requests: 0
Write errors: 0
Total transferred: 172200000 bytes
HTML transferred: 147800000 bytes
Requests per second: 10023.41 [#/sec] (mean)
Time per request: 1.995 [ms] (mean)
Time per request: 0.100 [ms] (mean, across all concurrent requests)
Transfer rate: 16855.77 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.2 0 12
Processing: 0 1 0.8 1 19
Waiting: 0 0 0.5 0 17
Total: 0 1 0.9 1 19

Percentage of the requests served within a certain time (ms)
50% 1
66% 2
75% 2
80% 2
90% 2
95% 2
98% 3
99% 4
100% 19 (longest request)




boa


Speed/Lightness: 7/10 (2 tests avg 4598.73 Req/S)

Stability: 9/10

Simplicity: 7/10

Load-handling: 7/10


According to its Wikipedia entry, the boa server is used for serving up images on Slashdot. This wouldn't surprise me, except that thttpd beat it out pretty badly, even with the small files. Without HTTP/1.1 connection support, though, thttpd might not have gotten that far. Cruel world, but boa hasn't had a release in a while (the latest is from 2002), and it's about time it got an update. The code base is small, but not very well commented, which scores against it in simplicity.


Benchmarking localhost (be patient)


Server Software: Boa/0.94.13
Server Hostname: localhost
Server Port: 2622

Document Path: /screenshot.jpg
Document Length: 133175 bytes

Concurrency Level: 100
Time taken for tests: 99.204866 seconds
Complete requests: 100000
Failed requests: 0
Write errors: 0
Total transferred: 13336700000 bytes
HTML transferred: 13317500000 bytes
Requests per second: 1008.02 [#/sec] (mean)
Time per request: 99.205 [ms] (mean)
Time per request: 0.992 [ms] (mean, across all concurrent requests)
Transfer rate: 131285.11 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.2 0 8
Processing: 41 98 12.6 91 189
Waiting: 0 2 1.1 2 28
Total: 41 98 12.6 91 189

Percentage of the requests served within a certain time (ms)
50% 91
66% 109
75% 111
80% 111
90% 114
95% 118
98% 123
99% 133
100% 189 (longest request)



Benchmarking localhost (be patient)


Server Software: Boa/0.94.13
Server Hostname: localhost
Server Port: 2622

Document Path: /starter.png
Document Length: 1478 bytes

Concurrency Level: 20
Time taken for tests: 12.210854 seconds
Complete requests: 100000
Failed requests: 0
Write errors: 0
Total transferred: 166806672 bytes
HTML transferred: 147805912 bytes
Requests per second: 8189.44 [#/sec] (mean)
Time per request: 2.442 [ms] (mean)
Time per request: 0.122 [ms] (mean, across all concurrent requests)
Transfer rate: 13340.34 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.8 0 19
Processing: 0 1 1.2 2 20
Waiting: 0 0 0.8 0 18
Total: 0 1 1.6 2 23

Percentage of the requests served within a certain time (ms)
50% 2
66% 2
75% 2
80% 2
90% 4
95% 4
98% 5
99% 6
100% 23 (longest request)




thttpd


Speed/Lightness: 7/10 (2 tests avg 5364.12 Req/S)

Stability: 7/10

Simplicity: 9/10

Load-handling: 9/10


thttpd is a wonderfully written little server. The code is well documented and there isn't too much of it. It supports the base HTTP/1.1 standard, unlike boa (HTTP/1.0), which leaves room to expand it. It handled the jump from 20 to 100 concurrent connections very well, so it tops my load-handling scores (a throughput of around 237,000 KB/sec is more than enough to make your bandwidth the bottleneck). It packs quite a bit into a very small package...
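To put the bandwidth-bottleneck point in numbers (using a 100 Mbit/s uplink as an assumed baseline, which is mine, not something from the benchmarks):

```python
# thttpd's measured transfer rate from the ab run below
kbytes_per_sec = 237_660.08
gbits_per_sec = kbytes_per_sec * 1024 * 8 / 1e9
print(round(gbits_per_sec, 2))   # ~1.95 Gbit/s served from localhost

# A 100 Mbit/s uplink can only carry about 12,200 KB/s
uplink_kbytes = 100e6 / 8 / 1024
print(round(uplink_kbytes))      # 12207
```

In other words, the server can push roughly 19x more than that pipe could carry, so the network saturates long before thttpd does.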


Server Software: thttpd/2.25b
Server Hostname: localhost
Server Port: 2621

Document Path: /screenshot.jpg
Document Length: 133175 bytes

Concurrency Level: 100
Time taken for tests: 54.815574 seconds
Complete requests: 100000
Failed requests: 0
Write errors: 0
Total transferred: 13340133400 bytes
HTML transferred: 13317633175 bytes
Requests per second: 1824.30 [#/sec] (mean)
Time per request: 54.816 [ms] (mean)
Time per request: 0.548 [ms] (mean, across all concurrent requests)
Transfer rate: 237660.08 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1 1.1 1 28
Processing: 5 52 11.7 61 94
Waiting: 0 3 2.1 3 38
Total: 5 54 11.9 63 96

Percentage of the requests served within a certain time (ms)
50% 63
66% 63
75% 64
80% 64
90% 64
95% 65
98% 67
99% 71
100% 96 (longest request)



Benchmarking localhost (be patient)


Server Software: thttpd/2.25b
Server Hostname: localhost
Server Port: 2621

Document Path: /starter.png
Document Length: 1478 bytes

Concurrency Level: 20
Time taken for tests: 11.230982 seconds
Complete requests: 100000
Failed requests: 0
Write errors: 0
Total transferred: 170000000 bytes
HTML transferred: 147800000 bytes
Requests per second: 8903.94 [#/sec] (mean)
Time per request: 2.246 [ms] (mean)
Time per request: 0.112 [ms] (mean, across all concurrent requests)
Transfer rate: 14781.88 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.5 0 12
Processing: 0 1 0.9 1 14
Waiting: 0 0 1.0 1 13
Total: 0 1 1.2 1 18

Percentage of the requests served within a certain time (ms)
50% 1
66% 2
75% 2
80% 2
90% 3
95% 4
98% 5
99% 5
100% 18 (longest request)


The Results



Apache: 24/40 (60%)
lighttpd: 31/40 (77.5%)
boa: 30/40 (75%)
thttpd: 32/40 (80%)
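The totals are just the four category scores summed; a quick check of the arithmetic:

```python
# (speed, stability, simplicity, load-handling) scores from the sections above
scores = {
    "Apache":   (4, 10, 3, 7),
    "lighttpd": (9, 9, 5, 8),
    "boa":      (7, 9, 7, 7),
    "thttpd":   (7, 7, 9, 9),
}
for name, s in sorted(scores.items(), key=lambda kv: -sum(kv[1])):
    total = sum(s)
    print(f"{name}: {total}/40 ({total / 40:.1%})")
# thttpd: 32/40 (80.0%)
# lighttpd: 31/40 (77.5%)
# boa: 30/40 (75.0%)
# Apache: 24/40 (60.0%)
```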

For this purpose thttpd looks like it's the winner. This is an open source project though, and I would like to know what the community thinks should be done first. Are there other systems you think I should benchmark? Was I wrong about something that I should adjust for? I'll start the project sometime next week, so give me input!

James

Saturday, October 07, 2006

 

Fluxbox .9 to 1.0rc2

I just made the switch from a development .9 version of Fluxbox to the second release candidate for v1.0. Nicely done, Fluxbox developers! Everything is very responsive and I'm glad you brought back external tabs. I'm happy.

But for those of you who set up your Fluxbox configuration a while ago and happened to modify the placement of your minimize, maximize and close buttons, it's a shock to see them reset to the standard Windows positions. In any case, the relevant settings are in your ~/.fluxbox/init file:


session.screen0.titlebar.left: Close Minimize Maximize
session.screen0.titlebar.right: Stick


The above are adjusted for my preferences (as if that wasn't obvious) so change them to your preferred setting.

Take it easy all.

Monday, October 02, 2006

 

Kepty WAF scaffolding

The Kepty web application framework got a new addition today: scaffolding. It's been planned for a while, but I was holding off until all the other pieces had come together properly. Database-generated forms are great, but they take too much control away from the developer. The forms generated by the scaffold are code-based, meaning you can modify them as you please. My current working application has a database with a table named "posts." I simply use this command:


kepty scaffold posts


And I get an application for modifying and browsing the posts table, automatically mounted with name posts at location /posts. Browse to http://serveraddress/posts and you get this:



It's a start. My intention for the scaffolding process in the Kepty WAF is to generate something professional and easily modified that can actually be used in production (low-level code generation, rather than something like form(table), although that's possible too). In any case, take a look at the sourceforge project page for more.
