Django

Django django-admin.py startproject ImportError

27 Dec , 2014  

I suddenly got this error on Windows when trying to start a new Django project:

django-admin.py startproject ImportError: No module named django.core

The solution is to call the script through the Python interpreter explicitly (the .py file association usually points at a Python that doesn’t have Django installed):

python C:\Python27\Scripts\django-admin.py startproject example

ERP

Set up OFBiz to use PostgreSQL

27 Dec , 2014  

Here are the quick steps to get OFBiz running on PostgreSQL:

1. Install the PostgreSQL JDBC driver
Download http://jdbc.postgresql.org/download/postgresql-9.3-1102.jdbc4.jar and put it in “ofbiz\framework\entity\lib\jdbc”.
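
If you are on a Unix-like shell, the download and copy can look like this (on Windows, simply save the jar into the same folder):

# run from the OFBiz root directory
wget http://jdbc.postgresql.org/download/postgresql-9.3-1102.jdbc4.jar
cp postgresql-9.3-1102.jdbc4.jar framework/entity/lib/jdbc/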

2. Set up a new account and database
In pgAdmin, right-click on “Login Roles” and create a new account “ofbiz” with password “ofbiz”.
Also create a database called “ofbiz” owned by “ofbiz”, and make sure the role has write permission on it.
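
If you prefer the command line over pgAdmin, a rough equivalent on a Linux box (adjust for Windows) is:

# run as the postgres OS user; enter "ofbiz" when prompted for the password
sudo -u postgres createuser --pwprompt ofbiz
sudo -u postgres createdb --owner=ofbiz ofbiz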

3. Edit the configuration
Edit the entity engine configuration in “ofbiz\framework\entity\config” (the entityengine.xml file) and find the delegator named “default”. Change its datasource from “localhsql” to “localpostnew”. You can replace the current configuration with the block below:

    <delegator name="default" entity-model-reader="main" entity-group-reader="main" entity-eca-reader="main" distributed-cache-clear-enabled="false">
        <group-map group-name="org.ofbiz" datasource-name="localpostnew"/>
        <group-map group-name="org.ofbiz.olap" datasource-name="localderbyolap"/>
        <group-map group-name="org.ofbiz.tenant" datasource-name="localderbytenant"/>
        <!-- <group-map group-name="org.ofbiz" datasource-name="localmysql"/>
        <group-map group-name="org.ofbiz.olap" datasource-name="localmysqlolap"/>
        <group-map group-name="org.ofbiz.tenant" datasource-name="localmysqltenant"/>  -->
        <!-- <group-map group-name="org.ofbiz" datasource-name="localpostnew"/>
        <group-map group-name="org.ofbiz.olap" datasource-name="localpostolap"/>
        <group-map group-name="org.ofbiz.tenant" datasource-name="localposttenant"/> -->
    </delegator>

Next, we need to configure the PostgreSQL connection; here is my example:

<datasource name="localpostgres"
            helper-class="org.ofbiz.entity.datasource.GenericHelperDAO"
            schema-name="public"
            field-type-name="postgres"
            check-on-start="true"
            add-missing-on-start="true"
            use-fk-initially-deferred="false"
            alias-view-columns="false"
            join-style="ansi"
            use-binary-type-for-blob="true"
            use-order-by-nulls="true">
            <!-- use this attribute to make the EntityListIterator more effective for pgjdbc 7.5devel and later:
                result-fetch-size="50"
            -->
        <read-data reader-name="tenant"/>
        <read-data reader-name="seed"/>
        <read-data reader-name="seed-initial"/>
        <read-data reader-name="demo"/>
        <read-data reader-name="ext"/>
        <read-data reader-name="ext-test"/>
        <read-data reader-name="ext-demo"/>
        <inline-jdbc
                jdbc-driver="org.postgresql.Driver"
                jdbc-uri="jdbc:postgresql://127.0.0.1:5432/ofbiz"
                jdbc-username="ofbiz"
                jdbc-password="ofbiz"
                isolation-level="ReadCommitted"
                pool-minsize="2"
                pool-maxsize="250"
                time-between-eviction-runs-millis="600000"/><!-- Be warned that at this date (2009-09-20) the max_connections parameters in postgresql.conf
                is set by default to 100 by the initdb process see http://www.postgresql.org/docs/8.4/static/runtime-config-connection.html#GUC-MAX-CONNECTIONS-->

        <!-- <jndi-jdbc jndi-server-name="default" jndi-name="java:comp/env/jdbc/localpostgres" isolation-level="ReadCommitted"/>-->
        <!-- <jndi-jdbc jndi-server-name="default" jndi-name="comp/env/jdbc/xa/localpostgres" isolation-level="ReadCommitted"/> --> <!-- Orion Style JNDI name -->
        <!-- <jndi-jdbc jndi-server-name="localweblogic" jndi-name="PostgresDataSource"/> --> <!-- Weblogic Style JNDI name -->
        <!-- <jndi-jdbc jndi-server-name="default" jndi-name="jdbc/localpostgres" isolation-level="ReadCommitted"/> --> <!-- JRun4 Style JNDI name -->
        <!-- <tyrex-dataSource dataSource-name="localpostgres" isolation-level="ReadCommitted"/> -->
    </datasource>

    <!-- use localpostnew for NEW installations (don't switch from localpostgres) and for PostgreSQL
     at or above 8.1 (for more information see the comment in the fieldtype file "fieldtypepostnew") -->

    <datasource name="localpostnew"
        helper-class="org.ofbiz.entity.datasource.GenericHelperDAO"
        schema-name="public"
        field-type-name="postnew"
        check-on-start="true"
        add-missing-on-start="true"
        use-fk-initially-deferred="false"
        alias-view-columns="false"
        join-style="ansi"
        result-fetch-size="50"
        use-binary-type-for-blob="true"
        use-order-by-nulls="true">
        <read-data reader-name="tenant"/>
        <read-data reader-name="seed"/>
        <read-data reader-name="seed-initial"/>
        <read-data reader-name="demo"/>
        <read-data reader-name="ext"/>
        <read-data reader-name="ext-test"/>
        <read-data reader-name="ext-demo"/>
        <inline-jdbc
            jdbc-driver="org.postgresql.Driver"
            jdbc-uri="jdbc:postgresql://127.0.0.1:5432/ofbiz"
            jdbc-username="ofbiz"
            jdbc-password="ofbiz"
            isolation-level="ReadCommitted"
            pool-minsize="2"
            pool-maxsize="250"
            time-between-eviction-runs-millis="600000"/><!-- Be warned that at this date (2009-09-20) the max_connections parameters in postgresql.conf
                is set by default to 100 by the initdb process see http://www.postgresql.org/docs/8.4/static/runtime-config-connection.html#GUC-MAX-CONNECTIONS-->
       
        <!-- <jndi-jdbc jndi-server-name="default" jndi-name="java:comp/env/jdbc/localpostgres" isolation-level="ReadCommitted"/>-->
        <!-- <jndi-jdbc jndi-server-name="default" jndi-name="comp/env/jdbc/xa/localpostgres" isolation-level="ReadCommitted"/> --> <!-- Orion Style JNDI name -->
        <!-- <jndi-jdbc jndi-server-name="localweblogic" jndi-name="PostgresDataSource"/> --> <!-- Weblogic Style JNDI name -->
        <!-- <jndi-jdbc jndi-server-name="default" jndi-name="jdbc/localpostgres" isolation-level="ReadCommitted"/> --> <!-- JRun4 Style JNDI name -->
        <!-- <tyrex-dataSource dataSource-name="localpostgres" isolation-level="ReadCommitted"/> -->
    </datasource>

Last step: execute “ant load-demo” from the OFBiz root directory.
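
For reference, loading the demo data and then starting OFBiz looks like this:

# from the OFBiz root directory
ant load-demo
ant start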

postgres

Create a user in PostgreSQL interactively

27 Dec , 2014  

To create a new user in PostgreSQL interactively:

sudo -i -u postgres
createuser <user-name> --interactive -P

Uncategorized

Install and set up OFBiz on Windows Server

26 Dec , 2014  

1. Download and install the Java JDK (I prefer 1.7) and set JAVA_HOME in the global environment variables to your JDK path.
Example: C:\Program Files\Java\jdk1.7.0_45
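
On Windows this can also be done from a command prompt, for example (adjust the path to your JDK install):

:: sets JAVA_HOME persistently for the current user
setx JAVA_HOME "C:\Program Files\Java\jdk1.7.0_45"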

2. Go to http://www.apache.org/dyn/closer.cgi/ofbiz/apache-ofbiz-13.07.01.zip and select a mirror to download

3. Extract the package into a local folder

4. Open a command prompt in the extracted OFBiz folder and run:

ant load-demo
ant start

5. Now open https://127.0.0.1:8443/accounting/control/main

ERP

Install OpenERP in Windows Server / 8

26 Dec , 2014  

Here are the quick steps to install OpenERP (Odoo) on Windows:

1. Download OpenERP
http://nightly.odoo.com/8.0/nightly/exe/odoo_8.0.latest.exe

2. Set up a PostgreSQL account
I assume we already have PostgreSQL on our Windows machine. Go to pgAdmin -> Server -> Login Roles.
Right-click on Login Roles, select New Login Role, and create a new account called “odoo” with the create-database permission.
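
If you prefer the command line over pgAdmin, something like this should work from the PostgreSQL bin directory (paths may differ on your install):

:: create the "odoo" role with the CREATEDB privilege; you will be prompted for its password
createuser.exe -U postgres --createdb --pwprompt odoo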

3. Install Odoo
During installation, skip the bundled PostgreSQL. When the database connection screen shows up, enter your PostgreSQL details with the user “odoo”.

4. Go to http://localhost:8069 and start setting up OpenERP

Ubuntu

How to roll back or remove the last commit in a remote Git repository

26 Dec , 2014  

This is a common thing that can happen: we push a bad commit and want to roll back to a previous one. Please try this on your own branch before applying it to production / master.

First, check the history with “git log”:

commit 10a8e361b35dd8131cb1ba3e606aad35d0d0f267
Author: Somebody
Date:   Fri Dec 26 13:02:33 2014 +0700

     Bad commit here

commit f89150c864cd4725a25995efd233f5beab7fc25d
Author: Somebody 2
Date:   Fri Dec 26 12:21:37 2014 +0700

    Stable version

So we want to roll back from the top commit to the bottom one. The easy way to do this is:

git push -f origin <your-stable-commit>:<your-branch>

Example:

git push -f origin f89150c864cd4725a25995efd233f5beab7fc25d:development

Then update your local copy with:

git reset --hard origin/development

If the bad commit only exists locally and has not been pushed to the remote yet, you can reset it with:

git reset --hard HEAD~<number-of-rollback>

Windows

Set up rsync on Windows

23 Dec , 2014  

Please go to http://www.rsync.net/resources/howto/windows_rsync.html and download the software.

Then add “C:\Program Files (x86)\cwRsync\bin” to your PATH environment variable.
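
After that, rsync can be called from any command prompt; a minimal example with hypothetical paths (cwRsync uses Cygwin-style /cygdrive paths):

rsync -av /cygdrive/c/data/ user@server:/backup/data/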

Ubuntu

Install the Indonesian (id_ID) locale in Ubuntu 14.04

15 Dec , 2014  

Here are the quick steps to install the locale in Ubuntu:

sudo locale-gen id_ID
sudo locale-gen id_ID.UTF-8
sudo update-locale
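
To verify that the locale was generated:

locale -a | grep -i id_id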

Server

Open PostgreSQL for remote access on EC2

12 Dec , 2014  

To make PostgreSQL accessible remotely, first open port 5432 in your EC2 security group (assuming your installation uses the default port).

Next, edit “/etc/postgresql/9.3/main/pg_hba.conf” and add:

host    all             all             0.0.0.0/0               md5
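
Depending on your setup, PostgreSQL may also need to listen on external interfaces; that is controlled by listen_addresses (a sketch, assuming the default Ubuntu 9.3 paths):

# /etc/postgresql/9.3/main/postgresql.conf
listen_addresses = '*'    # default is 'localhost', which rejects remote connections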

Restart the PostgreSQL service (e.g. “sudo service postgresql restart”) and test with:

psql -h server-ip -U username -d database_name

Django

Celery as a daemon service in Ubuntu for production

11 Dec , 2014  

Here are the quick steps to set up Celery as a daemon on Ubuntu 14.04 for production:

Create the file “/etc/init.d/celeryd”:

#!/bin/sh -e
# ============================================
#  celeryd - Starts the Celery worker daemon.
# ============================================
#
# :Usage: /etc/init.d/celeryd {start|stop|force-reload|restart|try-restart|status}
# :Configuration file: /etc/default/celeryd
#
# See http://docs.celeryproject.org/en/latest/tutorials/daemonizing.html#generic-init-scripts


### BEGIN INIT INFO
# Provides:          celeryd
# Required-Start:    $network $local_fs $remote_fs
# Required-Stop:     $network $local_fs $remote_fs
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: celery task worker daemon
### END INIT INFO
#
#
# To implement separate init scripts, copy this script and give it a different
# name:
# I.e., if my new application, "little-worker" needs an init, I
# should just use:
#
#   cp /etc/init.d/celeryd /etc/init.d/little-worker
#
# You can then configure this by manipulating /etc/default/little-worker.
#
VERSION=10.1
echo "celery init v${VERSION}."
if [ $(id -u) -ne 0 ]; then
    echo "Error: This program can only be used by the root user."
    echo "       Unprivileged users must use the 'celery multi' utility, "
    echo "       or 'celery worker --detach'."
    exit 1
fi


# Can be a runlevel symlink (e.g. S02celeryd)
if [ -L "$0" ]; then
    SCRIPT_FILE=$(readlink "$0")
else
    SCRIPT_FILE="$0"
fi
SCRIPT_NAME="$(basename "$SCRIPT_FILE")"

DEFAULT_USER="celery"
DEFAULT_PID_FILE="/var/run/celery/%n.pid"
DEFAULT_LOG_FILE="/var/log/celery/%n.log"
DEFAULT_LOG_LEVEL="INFO"
DEFAULT_NODES="celery"
DEFAULT_CELERYD="-m celery worker --detach"

CELERY_DEFAULTS=${CELERY_DEFAULTS:-"/etc/default/${SCRIPT_NAME}"}

# Make sure executable configuration script is owned by root
_config_sanity() {
    local path="$1"
    local owner=$(ls -ld "$path" | awk '{print $3}')
    local iwgrp=$(ls -ld "$path" | cut -b 6)
    local iwoth=$(ls -ld "$path" | cut -b 9)

    if [ "$(id -u $owner)" != "0" ]; then
        echo "Error: Config script '$path' must be owned by root!"
        echo
        echo "Resolution:"
        echo "Review the file carefully and make sure it has not been "
        echo "modified with mailicious intent.  When sure the "
        echo "script is safe to execute with superuser privileges "
        echo "you can change ownership of the script:"
        echo "    $ sudo chown root '$path'"
        exit 1
    fi

    if [ "$iwoth" != "-" ]; then  # S_IWOTH
        echo "Error: Config script '$path' cannot be writable by others!"
        echo
        echo "Resolution:"
        echo "Review the file carefully and make sure it has not been "
        echo "modified with malicious intent.  When sure the "
        echo "script is safe to execute with superuser privileges "
        echo "you can change the scripts permissions:"
        echo "    $ sudo chmod 640 '$path'"
        exit 1
    fi
    if [ "$iwgrp" != "-" ]; then  # S_IWGRP
        echo "Error: Config script '$path' cannot be writable by group!"
        echo
        echo "Resolution:"
        echo "Review the file carefully and make sure it has not been "
        echo "modified with malicious intent.  When sure the "
        echo "script is safe to execute with superuser privileges "
        echo "you can change the scripts permissions:"
        echo "    $ sudo chmod 640 '$path'"
        exit 1
    fi
}

if [ -f "$CELERY_DEFAULTS" ]; then
    _config_sanity "$CELERY_DEFAULTS"
    echo "Using config script: $CELERY_DEFAULTS"
    . "$CELERY_DEFAULTS"
fi

# Sets --app argument for CELERY_BIN
CELERY_APP_ARG=""
if [ ! -z "$CELERY_APP" ]; then
    CELERY_APP_ARG="--app=$CELERY_APP"
fi

CELERYD_USER=${CELERYD_USER:-$DEFAULT_USER}

# Set CELERY_CREATE_DIRS to always create log/pid dirs.
CELERY_CREATE_DIRS=${CELERY_CREATE_DIRS:-0}
CELERY_CREATE_RUNDIR=$CELERY_CREATE_DIRS
CELERY_CREATE_LOGDIR=$CELERY_CREATE_DIRS
if [ -z "$CELERYD_PID_FILE" ]; then
    CELERYD_PID_FILE="$DEFAULT_PID_FILE"
    CELERY_CREATE_RUNDIR=1
fi
if [ -z "$CELERYD_LOG_FILE" ]; then
    CELERYD_LOG_FILE="$DEFAULT_LOG_FILE"
    CELERY_CREATE_LOGDIR=1
fi

CELERYD_LOG_LEVEL=${CELERYD_LOG_LEVEL:-${CELERYD_LOGLEVEL:-$DEFAULT_LOG_LEVEL}}
CELERY_BIN=${CELERY_BIN:-"celery"}
CELERYD_MULTI=${CELERYD_MULTI:-"$CELERY_BIN multi"}
CELERYD_NODES=${CELERYD_NODES:-$DEFAULT_NODES}

export CELERY_LOADER

if [ -n "$2" ]; then
    CELERYD_OPTS="$CELERYD_OPTS $2"
fi

CELERYD_LOG_DIR=`dirname $CELERYD_LOG_FILE`
CELERYD_PID_DIR=`dirname $CELERYD_PID_FILE`

# Extra start-stop-daemon options, like user/group.
if [ -n "$CELERYD_CHDIR" ]; then
    DAEMON_OPTS="$DAEMON_OPTS --workdir=$CELERYD_CHDIR"
fi


check_dev_null() {
    if [ ! -c /dev/null ]; then
        echo "/dev/null is not a character device!"
        exit 75  # EX_TEMPFAIL
    fi
}


maybe_die() {
    if [ $? -ne 0 ]; then
        echo "Exiting: $* (errno $?)"
        exit 77  # EX_NOPERM
    fi
}

create_default_dir() {
    if [ ! -d "$1" ]; then
        echo "- Creating default directory: '$1'"
        mkdir -p "$1"
        maybe_die "Couldn't create directory $1"
        echo "- Changing permissions of '$1' to 02755"
        chmod 02755 "$1"
        maybe_die "Couldn't change permissions for $1"
        if [ -n "$CELERYD_USER" ]; then
            echo "- Changing owner of '$1' to '$CELERYD_USER'"
            chown "$CELERYD_USER" "$1"
            maybe_die "Couldn't change owner of $1"
        fi
        if [ -n "$CELERYD_GROUP" ]; then
            echo "- Changing group of '$1' to '$CELERYD_GROUP'"
            chgrp "$CELERYD_GROUP" "$1"
            maybe_die "Couldn't change group of $1"
        fi
    fi
}


check_paths() {
    if [ $CELERY_CREATE_LOGDIR -eq 1 ]; then
        create_default_dir "$CELERYD_LOG_DIR"
    fi
    if [ $CELERY_CREATE_RUNDIR -eq 1 ]; then
        create_default_dir "$CELERYD_PID_DIR"
    fi
}

create_paths() {
    create_default_dir "$CELERYD_LOG_DIR"
    create_default_dir "$CELERYD_PID_DIR"
}

export PATH="${PATH:+$PATH:}/usr/sbin:/sbin"


_get_pidfiles () {
    # note: multi < 3.1.14 output to stderr, not stdout, hence the redirect.
    ${CELERYD_MULTI} expand "${CELERYD_PID_FILE}" ${CELERYD_NODES} 2>&1
}


_get_pids() {
    found_pids=0
    my_exitcode=0

    for pidfile in $(_get_pidfiles); do
        local pid=`cat "$pidfile"`
        local cleaned_pid=`echo "$pid" | sed -e 's/[^0-9]//g'`
        if [ -z "$pid" ] || [ "$cleaned_pid" != "$pid" ]; then
            echo "bad pid file ($pidfile)"
            one_failed=true
            my_exitcode=1
        else
            found_pids=1
            echo "$pid"
        fi

    if [ $found_pids -eq 0 ]; then
        echo "${SCRIPT_NAME}: All nodes down"
        exit $my_exitcode
    fi
    done
}


_chuid () {
    su "$CELERYD_USER" -c "$CELERYD_MULTI $*"
}


start_workers () {
    if [ ! -z "$CELERYD_ULIMIT" ]; then
        ulimit $CELERYD_ULIMIT
    fi
    _chuid $* start $CELERYD_NODES $DAEMON_OPTS     \
                 --pidfile="$CELERYD_PID_FILE"      \
                 --logfile="$CELERYD_LOG_FILE"      \
                 --loglevel="$CELERYD_LOG_LEVEL"    \
                 $CELERY_APP_ARG                    \
                 $CELERYD_OPTS
}


dryrun () {
    (C_FAKEFORK=1 start_workers --verbose)
}


stop_workers () {
    _chuid stopwait $CELERYD_NODES --pidfile="$CELERYD_PID_FILE"
}


restart_workers () {
    _chuid restart $CELERYD_NODES $DAEMON_OPTS      \
                   --pidfile="$CELERYD_PID_FILE"    \
                   --logfile="$CELERYD_LOG_FILE"    \
                   --loglevel="$CELERYD_LOG_LEVEL"  \
                   $CELERY_APP_ARG                  \
                   $CELERYD_OPTS
}


kill_workers() {
    _chuid kill $CELERYD_NODES --pidfile="$CELERYD_PID_FILE"
}


restart_workers_graceful () {
    echo "WARNING: Use with caution in production"
    echo "The workers will attempt to restart, but they may not be able to."
    local worker_pids=
    worker_pids=`_get_pids`
    [ "$one_failed" ] && exit 1

    for worker_pid in $worker_pids; do
        local failed=
        kill -HUP $worker_pid 2> /dev/null || failed=true
        if [ "$failed" ]; then
            echo "${SCRIPT_NAME} worker (pid $worker_pid) could not be restarted"
            one_failed=true
        else
            echo "${SCRIPT_NAME} worker (pid $worker_pid) received SIGHUP"
        fi
    done

    [ "$one_failed" ] && exit 1 || exit 0
}


check_status () {
    my_exitcode=0
    found_pids=0

    local one_failed=
    for pidfile in $(_get_pidfiles); do
        if [ ! -r $pidfile ]; then
            echo "${SCRIPT_NAME} down: no pidfiles found"
            one_failed=true
            break
        fi

        local node=`basename "$pidfile" .pid`
        local pid=`cat "$pidfile"`
        local cleaned_pid=`echo "$pid" | sed -e 's/[^0-9]//g'`
        if [ -z "$pid" ] || [ "$cleaned_pid" != "$pid" ]; then
            echo "bad pid file ($pidfile)"
            one_failed=true
        else
            local failed=
            kill -0 $pid 2> /dev/null || failed=true
            if [ "$failed" ]; then
                echo "${SCRIPT_NAME} (node $node) (pid $pid) is down, but pidfile exists!"
                one_failed=true
            else
                echo "${SCRIPT_NAME} (node $node) (pid $pid) is up..."
            fi
        fi
    done

    [ "$one_failed" ] && exit 1 || exit 0
}


case "$1" in
    start)
        check_dev_null
        check_paths
        start_workers
    ;;

    stop)
        check_dev_null
        check_paths
        stop_workers
    ;;

    reload|force-reload)
        echo "Use restart"
    ;;

    status)
        check_status
    ;;

    restart)
        check_dev_null
        check_paths
        restart_workers
    ;;

    graceful)
        check_dev_null
        restart_workers_graceful
    ;;

    kill)
        check_dev_null
        kill_workers
    ;;

    dryrun)
        check_dev_null
        dryrun
    ;;

    try-restart)
        check_dev_null
        check_paths
        restart_workers
    ;;

    create-paths)
        check_dev_null
        create_paths
    ;;

    check-paths)
        check_dev_null
        check_paths
    ;;

    *)
        echo "Usage: /etc/init.d/${SCRIPT_NAME} {start|stop|restart|graceful|kill|dryrun|create-paths}"
        exit 64  # EX_USAGE
    ;;
esac

exit 0

Now make it executable:

sudo chmod a+x /etc/init.d/celeryd

Then create “/etc/default/celeryd”:

# App instance to use
# comment out this line if you don't use an app
CELERY_APP="yourprojectname"
# Where to chdir at start.
CELERYD_CHDIR="/path/to/your/project"

# Names of nodes to start
#   most will only start one node:
CELERYD_NODES="worker1"
#   but you can also start multiple and configure settings
#   for each in CELERYD_OPTS (see `celery multi --help` for examples).
CELERYD_NODES="worker"

# Absolute or relative path to the 'celery' command:
CELERY_BIN="/usr/local/bin/celery"
#CELERY_BIN="/virtualenvs/def/bin/celery"

# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=8"

# %N will be replaced with the first part of the nodename.
CELERYD_LOG_FILE="/var/log/celery/%N.log"
CELERYD_PID_FILE="/var/run/celery/%N.pid"

# Workers should run as an unprivileged user.
#   You need to create this user manually (or you can choose
#   a user/group combination that already exists, e.g. nobody).
CELERYD_USER="ubuntu"
CELERYD_GROUP="ubuntu"

# If enabled pid and log directories will be created if missing,
# and owned by the userid/group configured.
CELERY_CREATE_DIRS=1

Then we need to chown the required folders:

sudo chown ubuntu -R /var/run/celery/
sudo chown ubuntu -R /var/log/celery

Now we are ready to start the service:

sudo service celeryd start

You can see the log at “/var/log/celery/worker.log”
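
For example, to follow the log and check the daemon status:

tail -f /var/log/celery/worker.log
sudo service celeryd status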

For Celery beat, create the file “/etc/init.d/celerybeat”:

#!/bin/bash
# =========================================================
#  celerybeat - Starts the Celery periodic task scheduler.
# =========================================================
#
# :Usage: /etc/init.d/celerybeat {start|stop|force-reload|restart|try-restart|status}
# :Configuration file: /etc/default/celerybeat or /etc/default/celeryd
#
# See http://docs.celeryproject.org/en/latest/tutorials/daemonizing.html#generic-init-scripts

### BEGIN INIT INFO
# Provides:          celerybeat
# Required-Start:    $network $local_fs $remote_fs
# Required-Stop:     $network $local_fs $remote_fs
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: celery periodic task scheduler
### END INIT INFO

# Cannot use set -e/bash -e since the kill -0 command will abort
# abnormally in the absence of a valid process ID.
#set -e
VERSION=10.1
echo "celery init v${VERSION}."

if [ $(id -u) -ne 0 ]; then
    echo "Error: This program can only be used by the root user."
    echo "       Unpriviliged users must use 'celery beat --detach'"
    exit 1
fi


# May be a runlevel symlink (e.g. S02celeryd)
if [ -L "$0" ]; then
    SCRIPT_FILE=$(readlink "$0")
else
    SCRIPT_FILE="$0"
fi
SCRIPT_NAME="$(basename "$SCRIPT_FILE")"

# /etc/init.d/celerybeat: start and stop the celery periodic task scheduler daemon.

# Make sure executable configuration script is owned by root
_config_sanity() {
    local path="$1"
    local owner=$(ls -ld "$path" | awk '{print $3}')
    local iwgrp=$(ls -ld "$path" | cut -b 6)
    local iwoth=$(ls -ld "$path" | cut -b 9)

    if [ "$(id -u $owner)" != "0" ]; then
        echo "Error: Config script '$path' must be owned by root!"
        echo
        echo "Resolution:"
        echo "Review the file carefully and make sure it has not been "
        echo "modified with mailicious intent.  When sure the "
        echo "script is safe to execute with superuser privileges "
        echo "you can change ownership of the script:"
        echo "    $ sudo chown root '$path'"
        exit 1
    fi

    if [ "$iwoth" != "-" ]; then  # S_IWOTH
        echo "Error: Config script '$path' cannot be writable by others!"
        echo
        echo "Resolution:"
        echo "Review the file carefully and make sure it has not been "
        echo "modified with malicious intent.  When sure the "
        echo "script is safe to execute with superuser privileges "
        echo "you can change the scripts permissions:"
        echo "    $ sudo chmod 640 '$path'"
        exit 1
    fi
    if [ "$iwgrp" != "-" ]; then  # S_IWGRP
        echo "Error: Config script '$path' cannot be writable by group!"
        echo
        echo "Resolution:"
        echo "Review the file carefully and make sure it has not been "
        echo "modified with malicious intent.  When sure the "
        echo "script is safe to execute with superuser privileges "
        echo "you can change the scripts permissions:"
        echo "    $ sudo chmod 640 '$path'"
        exit 1
    fi
}

scripts=""

if test -f /etc/default/celeryd; then
    scripts="/etc/default/celeryd"
    _config_sanity /etc/default/celeryd
    . /etc/default/celeryd
fi

EXTRA_CONFIG="/etc/default/${SCRIPT_NAME}"
if test -f "$EXTRA_CONFIG"; then
    scripts="$scripts, $EXTRA_CONFIG"
    _config_sanity "$EXTRA_CONFIG"
    . "$EXTRA_CONFIG"
fi

echo "Using configuration: $scripts"

CELERY_BIN=${CELERY_BIN:-"celery"}
DEFAULT_USER="celery"
DEFAULT_PID_FILE="/var/run/celery/beat.pid"
DEFAULT_LOG_FILE="/var/log/celery/beat.log"
DEFAULT_LOG_LEVEL="INFO"
DEFAULT_CELERYBEAT="$CELERY_BIN beat"

CELERYBEAT=${CELERYBEAT:-$DEFAULT_CELERYBEAT}
CELERYBEAT_LOG_LEVEL=${CELERYBEAT_LOG_LEVEL:-${CELERYBEAT_LOGLEVEL:-$DEFAULT_LOG_LEVEL}}

# Sets --app argument for CELERY_BIN
CELERY_APP_ARG=""
if [ ! -z "$CELERY_APP" ]; then
    CELERY_APP_ARG="--app=$CELERY_APP"
fi

CELERYBEAT_USER=${CELERYBEAT_USER:-${CELERYD_USER:-$DEFAULT_USER}}

# Set CELERY_CREATE_DIRS to always create log/pid dirs.
CELERY_CREATE_DIRS=${CELERY_CREATE_DIRS:-0}
CELERY_CREATE_RUNDIR=$CELERY_CREATE_DIRS
CELERY_CREATE_LOGDIR=$CELERY_CREATE_DIRS
if [ -z "$CELERYBEAT_PID_FILE" ]; then
    CELERYBEAT_PID_FILE="$DEFAULT_PID_FILE"
    CELERY_CREATE_RUNDIR=1
fi
if [ -z "$CELERYBEAT_LOG_FILE" ]; then
    CELERYBEAT_LOG_FILE="$DEFAULT_LOG_FILE"
    CELERY_CREATE_LOGDIR=1
fi

export CELERY_LOADER

CELERYBEAT_OPTS="$CELERYBEAT_OPTS -f $CELERYBEAT_LOG_FILE -l $CELERYBEAT_LOG_LEVEL"

if [ -n "$2" ]; then
    CELERYBEAT_OPTS="$CELERYBEAT_OPTS $2"
fi

CELERYBEAT_LOG_DIR=`dirname $CELERYBEAT_LOG_FILE`
CELERYBEAT_PID_DIR=`dirname $CELERYBEAT_PID_FILE`

# Extra start-stop-daemon options, like user/group.

CELERYBEAT_CHDIR=${CELERYBEAT_CHDIR:-$CELERYD_CHDIR}
if [ -n "$CELERYBEAT_CHDIR" ]; then
    DAEMON_OPTS="$DAEMON_OPTS --workdir=$CELERYBEAT_CHDIR"
fi


export PATH="${PATH:+$PATH:}/usr/sbin:/sbin"

check_dev_null() {
    if [ ! -c /dev/null ]; then
        echo "/dev/null is not a character device!"
        exit 75  # EX_TEMPFAIL
    fi
}

maybe_die() {
    if [ $? -ne 0 ]; then
        echo "Exiting: $*"
        exit 77  # EX_NOPERM
    fi
}

create_default_dir() {
    if [ ! -d "$1" ]; then
        echo "- Creating default directory: '$1'"
        mkdir -p "$1"
        maybe_die "Couldn't create directory $1"
        echo "- Changing permissions of '$1' to 02755"
        chmod 02755 "$1"
        maybe_die "Couldn't change permissions for $1"
        if [ -n "$CELERYBEAT_USER" ]; then
            echo "- Changing owner of '$1' to '$CELERYBEAT_USER'"
            chown "$CELERYBEAT_USER" "$1"
            maybe_die "Couldn't change owner of $1"
        fi
        if [ -n "$CELERYBEAT_GROUP" ]; then
            echo "- Changing group of '$1' to '$CELERYBEAT_GROUP'"
            chgrp "$CELERYBEAT_GROUP" "$1"
            maybe_die "Couldn't change group of $1"
        fi
    fi
}

check_paths() {
    if [ $CELERY_CREATE_LOGDIR -eq 1 ]; then
        create_default_dir "$CELERYBEAT_LOG_DIR"
    fi
    if [ $CELERY_CREATE_RUNDIR -eq 1 ]; then
        create_default_dir "$CELERYBEAT_PID_DIR"
    fi
}


create_paths () {
    create_default_dir "$CELERYBEAT_LOG_DIR"
    create_default_dir "$CELERYBEAT_PID_DIR"
}


wait_pid () {
    pid=$1
    forever=1
    i=0
    while [ $forever -gt 0 ]; do
        kill -0 $pid 1>/dev/null 2>&1
        if [ $? -eq 1 ]; then
            echo "OK"
            forever=0
        else
            kill -TERM "$pid"
            i=$((i + 1))
            if [ $i -gt 60 ]; then
                echo "ERROR"
                echo "Timed out while stopping (30s)"
                forever=0
            else
                sleep 0.5
            fi
        fi
    done
}


stop_beat () {
    echo -n "Stopping ${SCRIPT_NAME}... "
    if [ -f "$CELERYBEAT_PID_FILE" ]; then
        wait_pid $(cat "$CELERYBEAT_PID_FILE")
    else
        echo "NOT RUNNING"
    fi
}

_chuid () {
    su "$CELERYBEAT_USER" -c "$CELERYBEAT $*"
}

start_beat () {
    echo "Starting ${SCRIPT_NAME}..."
    _chuid $CELERY_APP_ARG $CELERYBEAT_OPTS $DAEMON_OPTS --detach \
                --pidfile="$CELERYBEAT_PID_FILE"
}


check_status () {
    local failed=
    local pid_file=$CELERYBEAT_PID_FILE
    if [ ! -e $pid_file ]; then
        echo "${SCRIPT_NAME} is up: no pid file found"
        failed=true
    elif [ ! -r $pid_file ]; then
        echo "${SCRIPT_NAME} is in unknown state, user cannot read pid file."
        failed=true
    else
        local pid=`cat "$pid_file"`
        local cleaned_pid=`echo "$pid" | sed -e 's/[^0-9]//g'`
        if [ -z "$pid" ] || [ "$cleaned_pid" != "$pid" ]; then
            echo "${SCRIPT_NAME}: bad pid file ($pid_file)"
            failed=true
        else
            local failed=
            kill -0 $pid 2> /dev/null || failed=true
            if [ "$failed" ]; then
                echo "${SCRIPT_NAME} (pid $pid) is down, but pid file exists!"
                failed=true
            else
                echo "${SCRIPT_NAME} (pid $pid) is up..."
            fi
        fi
    fi

    [ "$failed" ] && exit 1 || exit 0
}


case "$1" in
    start)
        check_dev_null
        check_paths
        start_beat
    ;;
    stop)
        check_paths
        stop_beat
    ;;
    reload|force-reload)
        echo "Use start+stop"
    ;;
    status)
        check_status
    ;;
    restart)
        echo "Restarting celery periodic task scheduler"
        check_paths
        stop_beat
        check_dev_null
        start_beat
    ;;
    create-paths)
        check_dev_null
        create_paths
    ;;
    check-paths)
        check_dev_null
        check_paths
    ;;
    *)
        echo "Usage: /etc/init.d/${SCRIPT_NAME} {start|stop|restart|create-paths|status}"
        exit 64  # EX_USAGE
    ;;
esac

exit 0

Then make it executable by “sudo chmod a+x /etc/init.d/celerybeat”.

Now set up its configuration by creating the file “/etc/default/celerybeat”:

# Where to chdir at start.
CELERYBEAT_CHDIR="/project/path/"

# Extra arguments to celerybeat
CELERYBEAT_OPTS="--schedule=/var/run/celery/celerybeat-schedule"

# Name of the celery config module.#
CELERY_CONFIG_MODULE="celeryconfig"

And start celerybeat with:

sudo service celerybeat start
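
Celery beat logs to “/var/log/celery/beat.log” by default (see the init script above), so you can watch it with:

tail -f /var/log/celery/beat.log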

Uncategorized

Set up ERPNext in Ubuntu for production

11 Dec , 2014  

https://github.com/webnotes/erpnext/wiki/WSGI-Production-Deployment

Uncategorized

Install OpenERP in Ubuntu

10 Dec , 2014  

To install OpenERP in Ubuntu:

wget -c https://raw.githubusercontent.com/aschenkels-ictstudio/openerp-install-scripts/master/odoo-v8/ubuntu-14-04/odoo_install.sh
chmod a+x odoo_install.sh
./odoo_install.sh

Edit “/etc/odoo-server.conf”:

[options]
; This is the password that allows database operations:
; admin_passwd = admin
db_host = localhost
db_port = 5432
db_user = username
db_password = password
addons_path = /usr/lib/python2.7/dist-packages/openerp/addons

Make sure you set the username and password of a PostgreSQL account that is able to create databases.
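
If you still need to create such an account, a minimal sketch (replace “username” with your own):

# -d grants the CREATEDB privilege, -P prompts for a password
sudo -u postgres createuser -d -P username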

In NGINX, create a virtual host:

upstream oddo {
    server 127.0.0.1:8069;
}

server {
    listen 80;
    server_name openerp.polatic.com;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    location / {
        proxy_pass  http://oddo;
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
        proxy_redirect off;

        proxy_set_header    Host            $host;
        proxy_set_header    X-Real-IP       $remote_addr;
        proxy_set_header    X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header    X-Forwarded-Proto https;
    }

    location ~* /web/static/ {
        proxy_cache_valid 200 60m;
        proxy_buffering on;
        expires 864000;
        proxy_pass http://oddo;
    }
}
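
Don’t forget to enable the virtual host and reload NGINX; a sketch, assuming the file was saved as /etc/nginx/sites-available/odoo:

sudo ln -s /etc/nginx/sites-available/odoo /etc/nginx/sites-enabled/odoo
sudo nginx -t && sudo service nginx reload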

Then start the service:

sudo service odoo-server start

Done!

Django

Install Django in Windows 8 / 7

9 Dec , 2014  

Here are the quick steps to set up Django on a Windows box:

1. Install Python
Please download https://www.python.org/ftp/python/2.7.7/python-2.7.7.msi.

2. Install Setuptools
http://www.lfd.uci.edu/~gohlke/pythonlibs/4jci5y59/setuptools-5.8.win32-py2.7.exe

3. Install PIP
http://www.lfd.uci.edu/~gohlke/pythonlibs/4jci5y59/pip-1.5.6.win32-py2.7.exe

4. Don’t forget to set the global environment variables (depending on your Python installation path).
In PATH, please add:

C:\Python27\;C:\Python27\Scripts;C:\Python27\Lib\site-packages\django\bin;

5. Install Django

pip install django
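
You can verify the installation from a command prompt:

python -c "import django; print(django.get_version())"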

6. Create a new Django project

python C:\Python27\Scripts\django-admin.py startproject yourproject

Ubuntu

Solve RabbitMQ ERROR: epmd error for host “ubuntu”: address (cannot connect to host/port)

1 Dec , 2014  

When installing RabbitMQ on a server, I got these errors:

Preparing to unpack .../rabbitmq-server_3.2.4-1_all.deb ...                
Unpacking rabbitmq-server (3.2.4-1) ...                                    
yProcessing triggers for man-db (2.6.7.1-1) ...                            
Processing triggers for ureadahead (0.100.0-16) ...                        
Setting up rabbitmq-server (3.2.4-1) ...                                    
 * Starting message broker rabbitmq-server                                  
 * FAILED - check /var/log/rabbitmq/startup_\{log, _err\}                  
   ...fail!                                                                
invoke-rc.d: initscript rabbitmq-server, action "start" failed.            
dpkg: error processing package rabbitmq-server (--configure):              
 subprocess installed post-installation script returned error exit status 1
Processing triggers for ureadahead (0.100.0-16) ...                        
Errors were encountered while processing:                                  
 rabbitmq-server                                                            
E: Sub-process /usr/bin/dpkg returned an error code (1)                    
ubuntu@ubuntu:~$ cat /var/log/rabbitmq/startup_                            
startup_err  startup_log                                                    
ubuntu@ubuntu:~$ cat /var/log/rabbitmq/startup_log                          
ERROR: epmd error for host "ubuntu": address (cannot connect to host/port)  
ubuntu@ubuntu:~$ cat /var/log/rabbitmq/startup_log                          
ERROR: epmd error for host "ubuntu": address (cannot connect to host/port)

The solution is to map the hostname to localhost in “/etc/hosts”. Change:

127.0.0.1       localhost
192.168.0.1 ubuntu.ubuntu   ubuntu

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

into:

127.0.0.1       localhost
127.0.0.1       ubuntu
192.168.0.1 ubuntu.ubuntu   ubuntu

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

Django

Configure Django, Celery, and RabbitMQ in Windows with a periodic task

1 Dec , 2014  

Here is a guide to setting up Django, Celery, and RabbitMQ on Windows and running a periodic task.

1. Install RabbitMQ
Go to http://www.rabbitmq.com/download.html, install it, and check that the service is already running.
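
One way to check is via rabbitmqctl (run it from the RabbitMQ sbin directory; it is rabbitmqctl.bat on Windows):

rabbitmqctl status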

2. Set up Celery in your Django project
Install the django-celery module:

pip install django-celery

3. Example configuration for a periodic task
In this example, I create tasks.py in the app “schedules”:

from celery import Celery

app = Celery()

@app.task(ignore_result=True)  # ignore results to save memory
def update_student():
    """
    Generate student data every minute.
    """
    updating_student_database()  # placeholder for your own update logic
    print("this executed!")

In Project/project (where settings.py is located), create celery_configuration.py:

from __future__ import absolute_import

import os

from celery import Celery

from django.conf import settings

# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')

app = Celery('myproject')

# Using a string here means the worker will not have to
# pickle the object when using Windows.
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)

app.conf.update(
    CELERY_RESULT_BACKEND='djcelery.backends.database:DatabaseBackend',
    CELERY_TASK_RESULT_EXPIRES=3600,
)

if __name__ == '__main__':
    app.start()

Then, finally, in settings.py:

from datetime import timedelta

# Define which tasks will be executed periodically
CELERYBEAT_SCHEDULE = {
    'every-minute': {
        'task': 'schedules.tasks.update_student',
        'schedule': timedelta(seconds=10),
    },
}

Don’t forget to update your __init__.py (located in the same folder as settings.py):

from myproject.celery_configuration import app as celery_app

Then we just need to run “celeryd” and “celerybeat” to make the periodic task work:

python manage.py celery beat --app=myproject
python manage.py celeryd -l INFO
