mirror of https://gitlab.archlinux.org/archlinux/aurweb.git
removed tupkg. community management moved to main repos.
Signed-off-by: Loui Chang <louipc.ist@gmail.com>
parent 6caf3a4da0
commit 94b1e165e1
11 changed files with 0 additions and 2526 deletions
@ -1,99 +0,0 @@
TU Packaging Tools (tupkg)
--------------------------

- client side (python for proof of concept, later re-write to C?)

  The main purpose of this tool is to upload the compiled
  pkg.tar.gz to the server. It can (should?) do some verification
  on the package prior to uploading it. It will have a config file
  to store run-time information such as username (email), password,
  and server name.
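  For illustration, the client shipped in this commit reads that
  config as an INI file with a [tupkg] section; username and
  password are required, while host and port are optional and
  default to aur.archlinux.org and 1034. A sample ~/.tupkg
  (placeholder values):

    [tupkg]
    username = bfinch@example.net
    password = B0b
    host = aur.archlinux.org
    port = 1034
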
- server side (python for proof of concept, later re-write to C?)

  The server side will handle incoming connections from its client
  side counterpart. The server should bind to port 80 (maybe a
  vhost such as tupkg.archlinux.org?) so that firewalls won't be
  an issue. The server verifies the client authentication data,
  and then accepts the package(s). If port 80 is not available,
  perhaps 443, or are there other 'standard' ports that usually
  do not get filtered?

  I think the server should be multithreaded to handle simultaneous
  uploads rather than queue up requests. The download should be
  stored in a temp directory based on the username to prevent
  directory/filename clashes.

  Once the package(s) have been uploaded, the server can either kick
  off a gensync, or we can write a separate script to call gensync
  once or twice a day. My preference would be a separate script to
  call gensync (like the *NIX philosophy of one tool per task).

- protocol (c: => client, s: => server)

  Whenever the client/server exchange a message, it is always
  preceded by two bytes representing the following message's
  length. For example, when the client connects, it will send:

    0x0028username=bfinch@example.net&password=B0b

  0x0028 is the 40-byte length of the message, encoded in two bytes.
  The client and server always read two bytes from the socket, then
  know how much data is coming and can read that amount of bytes
  from the socket.

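  As a minimal sketch of this framing (mirroring the reliableRead,
  sendMsg and readMsg helpers in the client and server below; the
  prefix is sent in network byte order via htons/ntohs):

    import socket
    import struct

    def recv_exact(sock, size):
        # recv() can legally return fewer bytes than asked for;
        # loop until exactly `size` bytes have arrived.
        data = ""
        while len(data) < size:
            chunk = sock.recv(size - len(data))
            if len(chunk) == 0:
                raise RuntimeError("socket connection broken")
            data += chunk
        return data

    def send_msg(sock, msg):
        # Two-byte, network-order length prefix, then the payload.
        sock.sendall(struct.pack("H", socket.htons(len(msg))))
        sock.sendall(msg)

    def read_msg(sock):
        (length,) = struct.unpack("H", recv_exact(sock, 2))
        return recv_exact(sock, socket.ntohs(length))
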
  ==> authentication
  c: username=emailaddy&password=mypassword
  s: result=PASS|FAIL

  NOTE: We can add encryption easily enough with the python
  version using the socket.ssl method.

  ==> uploading package data
  if PASS:

    c: numpkgs=2&name1=p1.pkg.tar.gz&size1=123&md5sum1=abcd\
       name2=p2.pkg.tar.gz&size2=3&md5sum2=def1
    s: numpkgs=2&name1=p1.pkg.tar.gz&size1=119&\
       name2=p2.pkg.tar.gz&size2=0 (*)

    (*) NOTE: The server replies back to the client with how many
    packages it has already received and their local file sizes.
    This way, the client can resume an upload. In the example
    above, the server still needs the last four (123-119) bytes
    of the first package, and has no part of the second package.
    The client would then begin sending the last four bytes of the
    first package (p1.pkg.tar.gz) and follow them with the full
    second package (p2.pkg.tar.gz). The data would be sent as one
    continuous chunk, so the server will need to track which bytes
    belong to which package.

  else FAIL:
    c: -spits out error message on stderr to user-

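  As a sketch, the client side of that resume logic amounts to
  seeking past whatever the server already holds and streaming the
  remainder of every package back-to-back (the sendFiles method in
  the client below does exactly this with the sizes the server
  echoes back):

    def send_remainders(sock, files):
        # Each entry has fd (open file), size (total bytes), and
        # cur_done (bytes the server reported it already holds).
        for f in files:
            f.fd.seek(f.cur_done)
            while f.fd.tell() < f.size:
                sock.sendall(f.fd.read(1024))
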
  ==> after upload completes
  The server should verify the integrity of the uploaded packages
  by doing an md5sum on each and sending the info back to the client
  for comparison. After sending the message, the server waits for
  the 'ack' message from the client and then closes the connection.

    s: np=2&m1=PASS&m2=FAIL
    c: ack

  The client replies with the 'ack' and then closes its connection
  to the server. It then reports the PASS/FAIL status of each
  package's upload attempt.

  NOTE: If the upload fails (client connection dies), the server
  keeps any data it has received in order to support resuming an
  upload. However, if the client uploads all data, the server
  successfully reads all of it, and the final MD5 check still fails,
  the server deletes the failed package.


Terms/definitions:
======================
TU - No change (trusted by the community, if anyone asks what trust
     means)
TUR - renamed to Arch User-community Repo (AUR) (so we can use -u for
      versions)
Incoming - renamed to "Unsupported"

@ -1,132 +0,0 @@
#!/bin/bash

# Source makepkg.conf; fail if it is not found
if [ -r "/etc/makepkg.conf" ]; then
    source "/etc/makepkg.conf"
else
    echo "/etc/makepkg.conf not found!"
    exit 1
fi

# Source user-specific makepkg.conf overrides
if [ -r ~/.makepkg.conf ]; then
    source ~/.makepkg.conf
fi

cmd=`basename $0`

if [ ! -f PKGBUILD ]; then
    echo "No PKGBUILD file"
    exit 1
fi

# define tags and staging areas based on architecture
if [ "$CARCH" = "i686" ]; then
    currenttag='CURRENT'
    testingtag='TESTING'
    suffix=''
elif [ "$CARCH" = "x86_64" ]; then
    currenttag='CURRENT-64'
    testingtag='TESTING-64'
    suffix='64'
else
    echo "CARCH must be set to a recognized value!"
    exit 1
fi

source PKGBUILD
pkgfile=${pkgname}-${pkgver}-${pkgrel}-${CARCH}.pkg.tar.gz
oldstylepkgfile=${pkgname}-${pkgver}-${pkgrel}.pkg.tar.gz

if [ ! -f $pkgfile ]; then
    if [ -f $PKGDEST/$pkgfile ]; then
        pkgfile=$PKGDEST/$pkgfile
        oldstylepkgfile=$PKGDEST/$oldstylepkgfile
    elif [ -f $oldstylepkgfile ]; then
        pkgfile=$oldstylepkgfile
    elif [ -f $PKGDEST/$oldstylepkgfile ]; then
        pkgfile=$PKGDEST/$oldstylepkgfile
    else
        echo "File $pkgfile doesn't exist"
        exit 1
    fi
fi

# dispatch on the name this script was invoked under
if [ "$cmd" == "extrapkg" ]; then
    repo="extra"
    tag="$currenttag"
elif [ "$cmd" == "corepkg" ]; then
    repo="core"
    tag="$currenttag"
elif [ "$cmd" == "testingpkg" ]; then
    repo="testing"
    tag="$testingtag"
elif [ "$cmd" == "unstablepkg" ]; then
    repo="unstable"
    tag="$currenttag"
elif [ "$cmd" == "communitypkg" ]; then
    repo="community"
    tag="$currenttag"
fi

# see if any limit options were passed; we'll send them to scp
unset scpopts
if [ "$1" = "-l" ]; then
    scpopts="$1 $2"
    shift 2
fi

if [ "$repo" != "community" ]; then
    # combine what we know into a variable (suffix defined based on $CARCH)
    uploadto="staging/${repo}${suffix}/add/$(basename ${pkgfile})"
    scp ${scpopts} "${pkgfile}" "archlinux.org:${uploadto}"
    if [ "$(md5sum "${pkgfile}" | cut -d' ' -f1)" != "$(ssh archlinux.org md5sum "${uploadto}" | cut -d' ' -f1)" ]; then
        echo "File got corrupted during upload, cancelled."
        exit 1
    else
        echo "File integrity okay."
    fi
else
    if [ ! -f ~/.tupkg ]; then
        echo "Must configure tupkg via ~/.tupkg, cancelled"
        exit 1
    fi
    if [ "$(basename $pkgfile)" != "$(basename $oldstylepkgfile)" ]; then
        echo "Renaming makepkg3 package for compatibility"
        mv $pkgfile $oldstylepkgfile
        pkgfile=$oldstylepkgfile
    fi
    tupkg $pkgfile
fi
if [ $? -ne 0 ]; then
    echo "Cancelled"
    exit 1
fi
echo "===> Uploaded $pkgfile"

if [ "$1" != "" ]; then
    cvs commit -m "upgpkg: $pkgname $pkgver-$pkgrel
$1" > /dev/null
    if [ $? -ne 0 ]; then
        echo "Cancelled"
        exit 1
    fi
    echo "===> Committed with \"upgpkg: $pkgname $pkgver-$pkgrel
$1\" message"
else
    cvs commit -m "upgpkg: $pkgname $pkgver-$pkgrel" > /dev/null
    if [ $? -ne 0 ]; then
        echo "Cancelled"
        exit 1
    fi
    echo "===> Committed with \"upgpkg: $pkgname $pkgver-$pkgrel\" message"
fi

cvs tag -c -F -R $tag > /dev/null
if [ $? -ne 0 ]; then
    echo "Cancelled"
    exit 1
fi
echo "===> Tagged as $tag"

# vim:ft=sh:ts=4:sw=4:et:
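Note that the script above branches on the name it was invoked
under (cmd=`basename $0`), so a single copy is presumably installed
and then linked once per repository-specific name, e.g.:

    ln -s communitypkg extrapkg
    ln -s communitypkg corepkg
    ln -s communitypkg testingpkg
    ln -s communitypkg unstablepkg

Only the name-based dispatch is visible in the script itself; the
canonical install name and layout are assumptions.
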
@ -1,216 +0,0 @@
#!/usr/bin/python -O
#
# Description:
# ------------
# This is the client-side portion of the Trusted User package
# manager. The TUs will use this program to upload packages into
# the AUR. For more information, see the ../README.txt file.
#
# Python Indentation:
# -------------------
# For a vim: line to be effective, it must be at the end of the
# file. See the end of the file for more information.
#

import sys
import socket
import os
import struct
import os.path
import cgi
import urllib
import getopt
import ConfigParser

from hashlib import md5

class ClientFile:
	def __init__(self, pathname):
		self.pathname = pathname
		self.filename = os.path.basename(pathname)
		self.fd = open(pathname, "rb")
		self.fd.seek(0, 2)
		self.size = self.fd.tell()
		self.fd.seek(0)
		self.makeMd5()

	def makeMd5(self):
		md5sum = md5()
		while self.fd.tell() != self.size:
			md5sum.update(self.fd.read(1024))
		self.md5 = md5sum.hexdigest()

class ClientSocket:
	def __init__(self, files, host, port, username, password):
		self.files = files
		self.host = host
		self.port = port
		self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
		self.username = username
		self.password = password

	def connect(self):
		self.socket.connect((self.host, self.port))

	# Read exactly `size` bytes; recv() may return short reads.
	def reliableRead(self, size):
		totalread = ""
		while len(totalread) < size:
			read = self.socket.recv(size-len(totalread))
			if len(read) == 0:
				raise RuntimeError, "socket connection broken"
			totalread += read
		return totalread

	# Frame a message with its two-byte, network-order length prefix.
	def sendMsg(self, msg):
		if type(msg) == dict:
			msg = urllib.urlencode(msg,1)
		length = struct.pack("H", socket.htons(len(msg)))
		self.socket.sendall(length)
		self.socket.sendall(msg)

	# Read one framed message; format=1 parses it as a query string.
	def readMsg(self, format=0):
		initsize = self.reliableRead(2)
		(length,) = struct.unpack("H", initsize)
		length = socket.ntohs(length)
		data = self.reliableRead(length)
		if format == 1:
			qs = cgi.parse_qs(data)
			return qs
		else:
			return data

	def close(self):
		self.socket.close()

	def auth(self):
		msg = {'username': self.username, 'password': self.password}
		self.sendMsg(msg)
		reply = self.readMsg(1)
		if reply['result'] == ["PASS"]:
			return 1
		elif reply['result'] == ["SQLERR"]:
			print "SQL server-side error"
			return 0
		else:
			return 0

	# Announce name/size/md5sum per package; the server replies with
	# how much of each file it already has (for resuming).
	def sendFileMeta(self):
		msg = {'numpkgs': len(self.files)}
		for i, v in enumerate(self.files):
			msg['name'+str(i)] = v.filename
			msg['size'+str(i)] = v.size
			msg['md5sum'+str(i)] = v.md5
		self.sendMsg(msg)
		reply = self.readMsg(1)
		print reply
		for i in reply:
			if i[:4] == 'size':
				self.files[int(i[4:])].cur_done = int(reply[i][0])

	def sendFiles(self):
		for i in self.files:
			i.fd.seek(i.cur_done)
			print "Uploading:", i.filename, str(i.size/1024), "kb"
			sdone = 0
			while i.fd.tell() < i.size:
				sdone+=1
				self.socket.sendall(i.fd.read(1024))
				if sdone % 100 == 0:
					print "\r",
					print str(sdone), "of", str(i.size/1024), "kb",
					sys.stdout.flush()
		reply = self.readMsg(1)
		print reply
		self.sendMsg("ack")

def usage():
	print "usage: tupkg [options] <package file>"
	print "options:"
	print "  -u, --user      Connect with username"
	print "  -P, --password  Connect with password"
	print "  -h, --host      Connect to host"
	print "  -p, --port      Connect to host on port (default 1034)"
	print "May also use conf file: ~/.tupkg"

def main(argv=None):
	if argv is None:
		argv = sys.argv

	confdict = {}
	conffile = os.path.join(os.getenv("HOME"),".tupkg") # try the standard location
	# Set variables from file now, may be overridden on command line
	if os.path.isfile(conffile):
		config = ConfigParser.ConfigParser()
		config.read(conffile)
		confdict['user'] = config.get('tupkg','username')
		confdict['password'] = config.get('tupkg','password')
		try:
			confdict['host'] = config.get('tupkg','host')
		except:
			confdict['host'] = 'aur.archlinux.org'
		try:
			confdict['port'] = config.getint('tupkg','port')
		except:
			confdict['port'] = 1034
	else:
		confdict['user'] = ""
		confdict['password'] = ""
		confdict['host'] = 'aur.archlinux.org'
		confdict['port'] = 1034
		if len(argv) == 1: # no config file and no args, bail
			usage()
			return 1

	try:
		optlist, args = getopt.getopt(argv[1:], "u:P:h:p:", ["user=", "password=", "host=", "port="])
	except getopt.GetoptError:
		usage()
		return 1

	for i, k in optlist:
		if i in ('-u', '--user'):
			confdict['user'] = k
		if i in ('-P', '--password'):
			confdict['password'] = k
		if i in ('-h', '--host'):
			confdict['host'] = k
		if i in ('-p', '--port'):
			confdict['port'] = int(k)

	files = []
	for i in args:
		try:
			files.append(ClientFile(i))
		except IOError, err:
			print "Error: " + err.strerror + ": '" + err.filename + "'"
			usage()
			return 1

	cs = ClientSocket(files, confdict['host'], confdict['port'], confdict['user'], confdict['password'])
	try:
		cs.connect()

		if not cs.auth():
			print "Error authenticating you, you bastard"
			return 1

		cs.sendFileMeta()

		cs.sendFiles()

		cs.close()
	except KeyboardInterrupt:
		print "Cancelling"
		cs.close()

	return 0

if __name__ == "__main__":
	sys.exit(main())

# Python Indentation:
# -------------------
# Use tabs not spaces. If you use vim, the following comment will
# configure it to use tabs.
#
# vim:noet:ts=2 sw=2 ft=python
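A typical invocation, going by usage() above (all values are
placeholders; with a populated ~/.tupkg the options can be omitted):

    tupkg -u bfinch@example.net -P B0b -h aur.archlinux.org -p 1034 p1.pkg.tar.gz
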
@ -1,4 +0,0 @@
#!/bin/bash
aurroot=/srv/http/aur

nohup $aurroot/aur/tupkg/server/tupkgs -c $aurroot/tupkgs.conf > $aurroot/tupkgs.log 2>&1 &
@ -1,19 +0,0 @@
#!/bin/bash

aurroot=/srv/http/aur

# Set HOME for correct cvs auth.
HOME=$aurroot

echo "--------------------"
date

# Update the CVS tree.
# Filter out useless output.
cd $aurroot/cvs
echo "Updating CVS..."
cvs update -dP 2>&1 | grep -v "Updating"

# tupkgupdate <repodir> <cvsdir> <incomingdir>
$aurroot/aur/tupkg/update/tupkgupdate -c $aurroot/tupkgs.conf --delete --paranoid /srv/ftp/community/os/i686 $aurroot/cvs $aurroot/packages/full
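The README above suggests running the repo update out of band "once
or twice a day", and this wrapper looks written for exactly that. A
plausible crontab entry (illustrative only; the wrapper's installed
path and the schedule are assumptions, as its own filename is not
shown in this hunk):

    0 */12 * * * /srv/http/aur/aur/tupkg/update/tupkgupdate.cron
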
@ -1,3 +0,0 @@
#!/bin/bash

killall tupkgs
@ -1,329 +0,0 @@
#!/usr/bin/python -O
#
# Description:
# ------------
# This is the server-side portion of the Trusted User package
# manager. This program will receive uploads from its client-side
# counterpart, tupkg. Once a package is received and verified, it
# is placed in a specified temporary incoming directory where
# a separate script will handle migrating it to the AUR. For
# more information, see the ../README.txt file.
#
# Python Indentation:
# -------------------
# For a vim: line to be effective, it must be at the end of the
# file. See the end of the file for more information.

import sys
import socket
import threading
import select
import struct
import cgi
import urllib
import MySQLdb
import MySQLdb.connections
import ConfigParser
import getopt
import os.path
import os
import time
from hashlib import md5

CONFIGFILE = '/etc/tupkgs.conf'

config = ConfigParser.ConfigParser()

class ClientFile:
	def __init__(self, filename, actual_size, actual_md5):
		self.pathname = os.path.join(confdict['incomingdir'], filename)
		self.filename = filename
		self.fd = open(self.pathname, "a+b")
		self.actual_size = actual_size
		self.actual_md5 = actual_md5
		self.getSize()
		self.orig_size = self.size

	def getSize(self):
		cur = self.fd.tell()
		self.fd.seek(0,2)
		self.size = self.fd.tell()
		self.fd.seek(cur)

	def makeMd5(self):
		m = md5()
		cur = self.fd.tell()
		self.getSize()
		self.fd.seek(0)
		while self.fd.tell() != self.size:
			m.update(self.fd.read(1024))
		self.fd.seek(cur)
		self.md5 = m.hexdigest()

	# Move a completed upload from the incoming dir to the cache dir.
	def finishDownload(self):
		self.fd.close()
		newpathname = os.path.join(confdict['cachedir'], self.filename)
		os.rename(self.pathname, newpathname)
		self.pathname = newpathname
		self.fd = open(self.pathname, "a+b")

	def delete(self):
		self.fd.close()
		os.remove(self.pathname)

# One thread per connected client.
class ClientSocket(threading.Thread):
	def __init__(self, sock, **other):
		threading.Thread.__init__(self, **other)
		self.socket = sock
		self.running = 1
		self.files = []

	def close(self):
		self.running = 0

	# Read exactly `size` bytes; recv() may return short reads.
	def reliableRead(self, size):
		totalread = ""
		while len(totalread) < size:
			read = self.socket.recv(size-len(totalread))
			if len(read) == 0:
				raise RuntimeError, "socket connection broken"
			totalread += read
		return totalread

	# Frame a message with its two-byte, network-order length prefix.
	def sendMsg(self, msg):
		if type(msg) == dict:
			msg = urllib.urlencode(msg,1)
		length = struct.pack("H", socket.htons(len(msg)))
		self.socket.sendall(length)
		self.socket.sendall(msg)

	# Read one framed message; format=1 parses it as a query string.
	def readMsg(self, format=0):
		initsize = self.reliableRead(2)
		(length,) = struct.unpack("H", initsize)
		length = socket.ntohs(length)
		data = self.reliableRead(length)
		if format == 1:
			qs = cgi.parse_qs(data)
			return qs
		else:
			return data

	def auth(self):
		authdata = self.readMsg(1)

		if (authdata.has_key('username')):
			print "Trying connection for user %s" % authdata['username']

		if (not authdata.has_key('username')) or (not authdata.has_key('password')):
			self.sendMsg("result=FAIL")
			return 0

		print "Connecting to MySQL database"
		dbconn = MySQLdb.connect(host=config.get('mysql', 'host'),
			user=config.get('mysql', 'username'),
			passwd=config.get('mysql', 'password'),
			db=config.get('mysql', 'db'))

		q = dbconn.cursor()
		m = md5()
		m.update(authdata['password'][0])
		encpw = m.hexdigest()
		try:
			q.execute("SELECT ID, Suspended, AccountTypeID FROM Users WHERE Username = '"+
				MySQLdb.escape_string(authdata['username'][0])+
				"' AND Passwd = '"+
				MySQLdb.escape_string(encpw)+
				"'")
			dbconn.close()
		except:
			self.sendMsg("result=SQLERR")
			return 0
		if q.rowcount == 0:
			self.sendMsg("result=FAIL")
			return 0
		row = q.fetchone()
		# Reject suspended accounts and anything but the privileged
		# account types (2 and 3).
		if row[1] != 0:
			self.sendMsg("result=FAIL")
			return 0
		if row[2] not in (2, 3):
			self.sendMsg("result=FAIL")
			return 0
		self.sendMsg("result=PASS")
		return 1

	def readFileMeta(self):
		files = self.readMsg(1)
		print files
		# Actually do file checking, et al
		for i in range(int(files['numpkgs'][0])):
			self.files.append(ClientFile(files['name'+str(i)][0], int(files['size'+str(i)][0]), files['md5sum'+str(i)][0]))
		# Echo the metadata back with our local sizes so the client
		# can resume, dropping the md5sum fields.
		new_files = files.copy()
		for i in files:
			if i[:4] == 'size':
				clientfile = self.files[int(i[4:])]
				new_files[i] = str(clientfile.orig_size)
			if i[:6] == 'md5sum':
				del new_files[i]
		self.sendMsg(new_files)

	def readFiles(self):
		for i in self.files:
			count = i.orig_size
			while count != i.actual_size:
				if count + 1024 > i.actual_size:
					i.fd.write(self.reliableRead(i.actual_size - count))
					count += i.actual_size - count
				else:
					i.fd.write(self.reliableRead(1024))
					count += 1024
			i.fd.flush()
		reply = {'numpkgs': len(self.files)}
		for i, v in enumerate(self.files):
			v.makeMd5()
			if v.actual_md5 == v.md5:
				reply['md5sum'+str(i)] = "PASS"
				v.finishDownload()
			else:
				reply['md5sum'+str(i)] = "FAIL"
				v.delete()
		self.sendMsg(reply)
		print self.readMsg()

	def run(self):
		try:
			if not self.auth():
				print "Error authenticating."
				self.close()
				return
			self.readFileMeta()
			self.readFiles()
		except RuntimeError, err:
			if err.__str__() == "socket connection broken":
				print "Client disconnected, cleaning up"
				self.close()
				return

class ServerSocket(threading.Thread):
	def __init__(self, port, maxqueue, **other):
		threading.Thread.__init__(self, **other)
		self.running = 1
		self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
		self.socket.bind(('', port))
		self.socket.listen(maxqueue)
		self.clients = []

	def _clean(self, client):
		if not client.isAlive():
			return 0
		return 1

	def close(self):
		self.socket.close()
		self.running = 0

	def run(self):
		while self.running:
			sread, swrite, serror = select.select([self.socket],[self.socket],[self.socket],1)
			if sread:
				(clientsocket, address) = self.socket.accept()
				print time.asctime(time.gmtime())
				print "New connection from " + str(address)
				ct = ClientSocket(clientsocket)
				ct.start()
				self.clients.append(ct)

			self.clients = filter(self._clean, self.clients)
		self.socket.close()
		[x.close() for x in self.clients]
		[x.join() for x in self.clients]

def usage(name):
	print "usage: " + name + " [options]"
	print "options:"
	print "  -c, --config    Specify an alternate config file (default " + CONFIGFILE + ")"

def getDefaultConfig():
	confdict = {}
	confdict['port'] = 1034
	confdict['cachedir'] = '/var/cache/tupkgs/'
	confdict['incomingdir'] = '/var/cache/tupkgs/incomplete/'
	confdict['maxqueue'] = 5

	return confdict


confdict = getDefaultConfig()

def main(argv=None):
	if argv is None:
		argv = sys.argv

	try:
		optlist, args = getopt.getopt(argv[1:], "c:", ["config="])
	except getopt.GetoptError:
		usage(argv[0])
		return 1

	conffile = CONFIGFILE

	for i, k in optlist:
		if i in ('-c', '--config'):
			conffile = k

	if not os.path.isfile(conffile):
		print "Error: cannot access config file ("+conffile+")"
		usage(argv[0])
		return 1

	config.read(conffile)

	running = 1

	print "Parsing config file"
	if config.has_section('tupkgs'):
		if config.has_option('tupkgs', 'port'):
			confdict['port'] = config.getint('tupkgs', 'port')
		if config.has_option('tupkgs', 'maxqueue'):
			confdict['maxqueue'] = config.getint('tupkgs', 'maxqueue')
		if config.has_option('tupkgs', 'cachedir'):
			confdict['cachedir'] = config.get('tupkgs', 'cachedir')
		if config.has_option('tupkgs', 'incomingdir'):
			confdict['incomingdir'] = config.get('tupkgs', 'incomingdir')

	print "Verifying "+confdict['cachedir']+" and "+confdict['incomingdir']+" exist"
	if not os.path.isdir(confdict['cachedir']):
		print "Creating "+confdict['cachedir']
		os.mkdir(confdict['cachedir'], 0755)
	if not os.path.isdir(confdict['incomingdir']):
		print "Creating "+confdict['incomingdir']
		os.mkdir(confdict['incomingdir'], 0755)

	print "Starting ServerSocket"
	servsock = ServerSocket(confdict['port'], confdict['maxqueue'])
	servsock.start()

	try:
		while running:
			# Maybe do stuff here?
			time.sleep(10)
	except KeyboardInterrupt:
		running = 0

	print "Waiting for threads to die"

	servsock.close()

	servsock.join()

	return 0

if __name__ == "__main__":
	sys.exit(main())

# Python Indentation:
# -------------------
# Use tabs not spaces. If you use vim, the following comment will
# configure it to use tabs.
#
# vim:noet:ts=2 sw=2 ft=python
@ -1,10 +0,0 @@
[tupkgs]
port = 1034
cachedir = /var/cache/tupkgs
incomingdir = /var/cache/tupkgs/incomplete

[mysql]
username = aur
password = aur
host = localhost
db = AUR
@ -1,622 +0,0 @@
#!/usr/bin/python -O

import re
import os
import sys
import pacman
import getopt
import MySQLdb
import MySQLdb.connections
import ConfigParser
from subprocess import Popen, PIPE

###########################################################
# Deal with configuration
###########################################################

conffile = '/etc/tupkgs.conf'

config = ConfigParser.ConfigParser()

############################################################

# Define some classes we need
class Version:
  def __init__(self):
    self.version = None
    self.file = None

class Package:
  def __init__(self):
    self.name = None
    self.category = None
    self.old = None
    self.new = None
    self.desc = None
    self.url = None
    self.depends = None
    self.sources = None

class PackageDatabase:
  def __init__(self, host, user, password, dbname):
    self.host = host
    self.user = user
    self.password = password
    self.dbname = dbname
    self.connection = MySQLdb.connect(host=host, user=user, passwd=password, db=dbname)

  # Reconnect if the server has dropped the connection.
  def cursor(self):
    try:
      self.connection.ping()
    except MySQLdb.OperationalError:
      self.connection = MySQLdb.connect(host=self.host, user=self.user, passwd=self.password, db=self.dbname)
    return self.connection.cursor()

  def lookup(self, packagename):
    warning("DB: Looking up package: " + packagename)
    q = self.cursor()
    q.execute("SELECT ID FROM Packages WHERE Name = '" +
      MySQLdb.escape_string(packagename) + "'")
    if (q.rowcount != 0):
      row = q.fetchone()
      return row[0]
    return None

  def getCategoryID(self, package):
    category_id = self.lookupCategory(package.category)
    if (category_id == None):
      category_id = 1
    warning("DB: Got category ID '" + str(category_id) + "' for package '" + package.name + "'")
    return category_id

  def insert(self, package, locationId):
    warning("DB: Inserting package: " + package.name)
    global repo_dir
    q = self.cursor()
    q.execute("INSERT INTO Packages " +
      "(Name, CategoryID, Version, FSPath, LocationID, SubmittedTS, Description, URL) VALUES ('" +
      MySQLdb.escape_string(package.name) + "', " +
      str(self.getCategoryID(package)) + ", '" +
      MySQLdb.escape_string(package.new.version) + "', '" +
      MySQLdb.escape_string(
        os.path.join(repo_dir, os.path.basename(package.new.file))) + "', " +
      str(locationId) + ", " +
      "UNIX_TIMESTAMP(), '" +
      MySQLdb.escape_string(str(package.desc)) + "', '" +
      MySQLdb.escape_string(str(package.url)) + "')")
    id = self.lookup(package.name)
    self.insertNewInfo(package, id, locationId)

  def update(self, id, package, locationId):
    warning("DB: Updating package: " + package.name + " with id " + str(id))
    global repo_dir
    q = self.cursor()
    if (self.isdummy(package.name)):
      q.execute("UPDATE Packages SET " +
        "Version = '" + MySQLdb.escape_string(package.new.version) + "', " +
        "CategoryID = " + str(self.getCategoryID(package)) + ", " +
        "FSPath = '" + MySQLdb.escape_string(
          os.path.join(repo_dir, os.path.basename(package.new.file))) + "', " +
        "Description = '" + MySQLdb.escape_string(str(package.desc)) + "', " +
        "DummyPkg = 0, " +
        "SubmittedTS = UNIX_TIMESTAMP(), " +
        "URL = '" + MySQLdb.escape_string(str(package.url)) + "' " +
        "WHERE ID = " + str(id))
    else:
      q.execute("UPDATE Packages SET " +
        "Version = '" + MySQLdb.escape_string(package.new.version) + "', " +
        "CategoryID = " + str(self.getCategoryID(package)) + ", " +
        "FSPath = '" + MySQLdb.escape_string(
          os.path.join(repo_dir, os.path.basename(package.new.file))) + "', " +
        "Description = '" + MySQLdb.escape_string(str(package.desc)) + "', " +
        "ModifiedTS = UNIX_TIMESTAMP(), " +
        "URL = '" + MySQLdb.escape_string(str(package.url)) + "' " +
        "WHERE ID = " + str(id))
    self.insertNewInfo(package, id, locationId)

    # Check to see if this is a move of a package from unsupported
    # to community, because we have to reset maintainer and location.

    q = self.cursor()
    q.execute("SELECT LocationID FROM Packages WHERE ID = " + str(id))
    if (q.rowcount != 0):
      row = q.fetchone()
      if (row[0] != 3):
        q = self.cursor()
        q.execute("UPDATE Packages SET LocationID = 3, MaintainerUID = null WHERE ID = " + str(id))

  def remove(self, id, locationId):
    warning("DB: Removing package with id: " + str(id))
    q = self.cursor()
    q.execute("DELETE FROM Packages WHERE " +
      "LocationID = " + str(locationId) + " AND ID = " + str(id))

  def clearOldInfo(self, id):
    warning("DB: Clearing old info for package with id: " + str(id))
    q = self.cursor()
    q.execute("DELETE FROM PackageContents WHERE PackageID = " + str(id))
    q.execute("DELETE FROM PackageDepends WHERE PackageID = " + str(id))
    q.execute("DELETE FROM PackageSources WHERE PackageID = " + str(id))

  def lookupOrDummy(self, packagename):
    retval = self.lookup(packagename)
    if (retval != None):
      return retval
    return self.createDummy(packagename)

  def lookupCategory(self, categoryname):
    warning("DB: Looking up category: " + categoryname)
    q = self.cursor()
    q.execute("SELECT ID from PackageCategories WHERE Category = '" + MySQLdb.escape_string(categoryname) + "'")
    if (q.rowcount != 0):
      row = q.fetchone()
      return row[0]
    return None

  def createDummy(self, packagename):
    warning("DB: Creating dummy package for: " + packagename)
    q = self.cursor()
    q.execute("INSERT INTO Packages " +
      "(Name, Description, LocationID, DummyPkg) " +
      "VALUES ('" +
      MySQLdb.escape_string(packagename) + "', '" +
      MySQLdb.escape_string("A dummy package") + "', 1, 1)")
    return self.lookup(packagename)

  def insertNewInfo(self, package, id, locationId):
    q = self.cursor()

    # First delete the old.
    self.clearOldInfo(id)

    warning("DB: Inserting new package info for " + package.name +
      " with id " + str(id))

    # PackageSources
    for source in package.sources:
      q.execute("INSERT INTO PackageSources (PackageID, Source) " +
        "VALUES (" + str(id) + ", '" + MySQLdb.escape_string(source) + "')")

    # PackageDepends
    for dep in package.depends:
      depid = self.lookupOrDummy(dep)
      q.execute("INSERT INTO PackageDepends (PackageID, DepPkgID) " +
        "VALUES (" + str(id) + ", " + str(depid) + ")")

  def isdummy(self, packagename):
    warning("DB: Looking up package: " + packagename)
    q = self.cursor()
    q.execute("SELECT * FROM Packages WHERE Name = '" +
      MySQLdb.escape_string(packagename) + "' AND DummyPkg = 1")
    if (q.rowcount != 0):
      return True
    return False

############################################################
# Functions for walking the file trees
############################################################

def filesForRegexp(topdir, regexp):
  retval = []
  def matchfile(regexp, dirpath, namelist):
    for name in namelist:
      if (regexp.match(name)):
        retval.append(os.path.join(dirpath, name))
  os.path.walk(topdir, matchfile, regexp)
  return retval

def packagesInTree(topdir):
  return filesForRegexp(topdir, re.compile(r"^.*\.pkg\.tar\.gz$"))

def pkgbuildsInTree(topdir):
  return filesForRegexp(topdir, re.compile(r"^PKGBUILD$"))

############################################################
# Function for testing if two files are identical
############################################################

def areFilesIdentical(file_a, file_b):
  command = "cmp '" + file_a + "' '" + file_b + "' >/dev/null"
  retval = os.system(command)
  if (retval == 0):
    return True
  return False

############################################################
# Functions for fetching info from PKGBUILDs and packages
############################################################

def infoFromPackageFile(filename):
  pkg = os.path.basename(filename)
  m = re.compile("(?P<pkgname>.*)-(?P<pkgver>.*)-(?P<pkgrel>.*).pkg.tar.gz").search(pkg)
  if not m:
    raise Exception("Non-standard filename")
  else:
    return m.group('pkgname'), m.group('pkgver') + "-" + m.group('pkgrel')

def infoFromPkgbuildFile(filename):
  # first grab the category based on the file path
  pkgdirectory = os.path.dirname(filename)
  catdirectory = os.path.dirname(pkgdirectory)
  m = re.match(r".*/([^/]+)$", catdirectory)
  if (m):
    category = m.group(1)
  else:
    category = "none"

  # open and source the file
  pf = Popen("/bin/bash",
    shell=True, bufsize=0, stdin=PIPE, stdout=PIPE, close_fds=True)

  print >>pf.stdin, ". " + filename
  #print "PKGBUILD: " + filename

  # get pkgname
  print >>pf.stdin, 'echo $pkgname'
  pkgname = pf.stdout.readline().strip()
  #print "PKGBUILD: pkgname: " + pkgname

  # get pkgver
  print >>pf.stdin, 'echo $pkgver'
  pkgver = pf.stdout.readline().strip()
  #print "PKGBUILD: pkgver: " + pkgver

  # get pkgrel
  print >>pf.stdin, 'echo $pkgrel'
  pkgrel = pf.stdout.readline().strip()
  #print "PKGBUILD: pkgrel: " + pkgrel

  # get url
  print >>pf.stdin, 'echo $url'
  url = pf.stdout.readline().strip()
  #print "PKGBUILD: url: " + url

  # get desc
  print >>pf.stdin, 'echo $pkgdesc'
  pkgdesc = pf.stdout.readline().strip()
  #print "PKGBUILD: pkgdesc: " + pkgdesc

  # get source array
  print >>pf.stdin, 'echo ${source[*]}'
  source = (pf.stdout.readline().strip()).split(" ")

  # get depends array
  print >>pf.stdin, 'echo ${depends[*]}'
  depends = (pf.stdout.readline().strip()).split(" ")

  # clean up
  pf.stdin.close()
  pf.stdout.close()

  return pkgname, pkgver + "-" + pkgrel, pkgdesc, url, depends, source, category

def infoFromPkgbuildFileWorse(filename):
  # load the file with pacman library
  pkg = pacman.load(filename)
  return (pkg.name, pkg.version + "-" + pkg.release, pkg.desc,
    pkg.url, pkg.depends, pkg.source)

############################################################
# Functions for doing the final steps of execution
############################################################

def execute(command):
  global switches
  print(command)
  if not (switches.get("-n") == True):
    return os.system(command)
  return 0

def copyFileToRepo(filename, repodir):
  destfile = os.path.join(repodir, os.path.basename(filename))
  command = "cp --preserve=timestamps '" + filename + "' '" + destfile + "'"
  return execute(command)

def deleteFile(filename):
  command = "rm '" + filename + "'"
  return execute(command)

def runRepoAdd(repo, package):
  global havefakeroot
  targetDB = os.path.join(repo, "community.db.tar.gz")
  destfile = os.path.join(repo, os.path.basename(package.new.file))
  if havefakeroot:
    command = "fakeroot repo-add '" + targetDB + "' '" + destfile + "'"
  else:
    command = "repo-add '" + targetDB + "' '" + destfile + "'"
  return execute(command)

def runRepoRemove(repo, pkgname):
  global havefakeroot
  targetDB = os.path.join(repo, "community.db.tar.gz")
  if havefakeroot:
    command = "fakeroot repo-remove '" + targetDB + "' '" + pkgname + "'"
  else:
    command = "repo-remove '" + targetDB + "' '" + pkgname + "'"
  return execute(command)

############################################################
# Functions for error handling
############################################################

def warning(string):
  print >>sys.stderr, string

had_error = 0
def error(string):
  global had_error
  warning(string)
  had_error = 1

def usage(name):
  print "Usage: %s [options] <repo_dir> <pkgbuild_tree> <build_tree>" % name
  print "Options:"
  print "  -c, --config    Specify a path to the config file."
  print "  -n              Don't actually perform any action on the repo."
  print "  --delete        Delete duplicate and temporary pkgs."
  print "  --paranoid      Warn of duplicate pkgs that aren't identical."
  sys.exit(1)

############################################################
# MAIN
############################################################

# ARGUMENTS
# See usage() for specifying arguments.

try:
  optlist, args = getopt.getopt(sys.argv[1:], 'c:n',
    ['config=', 'delete', 'paranoid'])
except getopt.GetoptError:
  usage(sys.argv[0])

switches = {}
for opt in optlist:
  switches[opt[0]] = 1

# Check for required arguments.
if (len(args) < 3):
  usage(sys.argv[0])

for opt, value in optlist:
  if opt in ('-c', '--config'):
    conffile = value

try:
  repo_dir, pkgbuild_dir, build_dir = args
except ValueError:
  usage(sys.argv[0])

if not os.path.isfile(conffile):
  print "Error: cannot access config file (%s)" % conffile
  sys.exit(1)

config.read(conffile)
config_use_db = config.has_section('mysql')

# Make sure we can use fakeroot, warn if not
havefakeroot = False
if os.access('/usr/bin/fakeroot', os.X_OK):
  havefakeroot = True
else:
  warning("Not using fakeroot for repo db generation")

# Open the database if we need it, so we find out now if we can't!
if config_use_db:
  try:
    db = PackageDatabase(config.get('mysql', 'host'),
      config.get('mysql', 'username'),
      config.get('mysql', 'password'),
      config.get('mysql', 'db'))
  except:
    print "Error: Could not connect to the database %s at %s." % (
      config.get('mysql', 'db'), config.get('mysql', 'host'))
    sys.exit(1)

# Set up the lists and tables
packages = dict()
copy = list()
delete = list()

dbremove = list()
dbmodify = list()

# PASS 1: PARSING/LOCATING
#
# A) Go through the PKGBUILD tree
#    For each PKGBUILD, create a Package with a new Version containing
#    the parsed version and None for the file

a_files = pkgbuildsInTree(pkgbuild_dir)
for a_file in a_files:
  pkgname, ver, desc, url, depends, sources, category = infoFromPkgbuildFile(a_file)

  # Error (and skip) if we encounter any invalid PKGBUILD files
  if (pkgname == None or ver == None):
    error("Pkgbuild '" + a_file + "' is invalid!")
    continue

  # Error (and skip) if we encounter any duplicate package names
  # in the PKGBUILDs
  if (packages.get(pkgname)):
    error("Pkgbuild '" + a_file + "' is a duplicate!")
    continue

  version = Version()
  version.version = ver
  version.file = None

  package = Package()
  package.name = pkgname
  package.category = category
  package.desc = desc
  package.url = url
  package.depends = depends
  package.sources = sources
  package.new = version

  # print "Package: desc " + desc

  packages[pkgname] = package

# B) Go through the old repo dir
#    For each package file we encounter, create a Package with an old
#    Version containing the parsed version and filepath

b_files = packagesInTree(repo_dir)
for b_file in b_files:
  pkgname, ver = infoFromPackageFile(b_file)

  version = Version()
  version.version = ver
  version.file = b_file

  package = packages.get(pkgname)
  if (package == None):
    package = Package()
    package.name = pkgname
    packages[pkgname] = package
  package.old = version

# C) Go through the build tree
#    For each package file we encounter:
#    1 - look up the package name; if it fails, ignore the file (no error)
#    2 - if package.new == None, ignore the package (no error)
#    3 - if package.new.version doesn't match, then skip (no error)
#    4 - if package.new.file == None, point it to this file
#        otherwise, log an error (and skip)

c_files = packagesInTree(build_dir)
for c_file in c_files:
  pkgname, ver = infoFromPackageFile(c_file)

  # 1
  package = packages.get(pkgname)
  if (package == None):
    continue

  # 2
  if (package.new == None):
    continue

  # 3
  if (package.new.version != ver):
    continue

  # 4
  if (package.new.file == None):
    package.new.file = c_file
    continue
  else:
    error("Duplicate new file '" + c_file + "'")
    continue

# PASS 2: CHECKING
#
# Go through the package collection
# 1 - if package has no new, place its old file on the "delete" list (and package on "dbremove")
# 2 - if package has a new but no new.file, and the old file doesn't
#     have the same version, then error (because gensync won't rebuild)
# 3 - if package has no old, add the new file to the "copy" list into the repo dir (and package on "dbmodify")
# 4 - if new == old and paranoid is set, compare the files and error if they're not the same;
#     otherwise just skip (no update)
# 5 - if we got here, it's a legit nontrivial new version which we allow:
#     add an entry to the "delete" list for the old file and to the "copy" list
#     for the new file into the repo dir (and package to "dbmodify")

for package in packages.values():
  # 1
  if (package.new == None):
    delete.append(package.old.file)
    dbremove.append(package)
    continue

  # 2
  if (package.new.file == None):
    if (package.old == None or package.old.file == None or
      package.old.version != package.new.version):
      errstr = "No new package supplied for " + package.name + " " + package.new.version + "!"
      error(errstr)
    continue

  # 3
  if (package.old == None):
    copy.append(package.new.file)
    dbmodify.append(package)
    continue

  # 4
  if (package.old.version == package.new.version):
    if (switches.get("--paranoid") == True and package.new.file != None):
      if not (areFilesIdentical(package.old.file, package.new.file)):
        warning("New package file with identical version '" +
          package.new.file + "' is different than the old one:")
        if (switches.get("--delete") == True):
          warning("  Deleting the new file.")
          delete.append(package.new.file)
        else:
          warning("  Ignoring the new file.")
    continue

  # 5
  delete.append(package.old.file)
  copy.append(package.new.file)
  dbmodify.append(package)
  continue

## IF WE HAVE HAD ANY ERRORS AT THIS POINT, ABORT! ##
if (had_error == 1):
  error("Aborting due to errors.")
  sys.exit(-1)

# PASS 3: EXECUTION
#

if config_use_db:
  # First, do all the database updates if asked for
  for package in dbremove:
    id = db.lookup(package.name)
    # Note: this could remove a package from unsupported; probably want to restrict to locationId and/or non-dummy
    if (id != None):
      db.clearOldInfo(id)
      db.remove(id, 3)

  for package in dbmodify:
    warning("DB: Package in dbmodify: " + package.name)
    id = db.lookup(package.name)
    if (id == None):
      db.insert(package, 3)
    else:
      db.update(id, package, 3)

# Copy
for file in copy:
  retval = copyFileToRepo(file, repo_dir)
  if (retval != 0):
    error("Could not copy file to repo: '" + file + "'")
    sys.exit(-1)

# Delete (second, for safety's sake)
for file in delete:
  deleteFile(file)

# Now that we've copied new files and deleted, we should delete the source
# files, if we're supposed to
if (switches.get("--delete") == True):
  for file in copy:
    deleteFile(file)

# Run repo-remove/repo-add where needed
for package in dbremove:
  retval = runRepoRemove(repo_dir, package.name)
  if (retval != 0):
    error("repo-remove returned an error!")
    sys.exit(-1)
for package in dbmodify:
  retval = runRepoAdd(repo_dir, package)
  if (retval != 0):
    error("repo-add returned an error!")
    sys.exit(-1)

# vim: ft=python ts=2 sw=2 et
@ -1,513 +0,0 @@
|
|||
#!/usr/bin/python -O
|
||||
|
||||
import re
|
||||
import os
|
||||
import sys
|
||||
import pacman
|
||||
import getopt
|
||||
import glob
|
||||
import MySQLdb
|
||||
import MySQLdb.connections
|
||||
import ConfigParser
|
||||
|
||||
###########################################################
|
||||
# Deal with configuration
|
||||
###########################################################
|
||||
|
||||
conffile = '/etc/tupkgs.conf'
|
||||
|
||||
if not os.path.isfile(conffile):
|
||||
print "Error: cannot access config file ("+conffile+")"
|
||||
usage(argv[0])
|
||||
sys.exit(1)
|
||||
|
||||
config = ConfigParser.ConfigParser()
|
||||
config.read(conffile)
|
||||
|
||||
############################################################
|
||||
|
||||
# Define some classes we need
|
||||
class Version:
|
||||
def __init__(self):
|
||||
self.version = None
|
||||
self.file = None
|
||||
|
||||
class Package:
|
||||
def __init__(self):
|
||||
self.name = None
|
||||
self.category = None
|
||||
self.old = None
|
||||
self.new = None
|
||||
self.desc = None
|
||||
self.url = None
|
||||
self.depends = None
|
||||
self.sources = None
|
||||
|
||||
class PackageDatabase:
|
||||
def __init__(self, host, user, password, dbname):
|
||||
self.host = host
|
||||
self.user = user
|
||||
self.password = password
|
||||
self.dbname = dbname
|
||||
self.connection = MySQLdb.connect(host=host, user=user, passwd=password, db=dbname)
|
||||
def cursor(self):
|
||||
return self.connection.cursor()
|
||||
def lookup(self, packagename):
|
||||
warning("DB: Looking up package: " + packagename)
|
||||
q = self.cursor()
|
||||
q.execute("SELECT ID FROM Packages WHERE Name = '" +
|
||||
MySQLdb.escape_string(packagename) + "'")
|
||||
if (q.rowcount != 0):
|
||||
row = q.fetchone()
|
||||
return row[0]
|
||||
return None
|
||||
def getCategoryID(self, package):
|
||||
category_id = self.lookupCategory(package.category)
|
||||
if (category_id == None):
|
||||
category_id = 1
|
||||
warning("DB: Got category ID '" + str(category_id) + "' for package '" + package.name + "'")
|
||||
return category_id
|
||||
def insert(self, package, locationId):
|
||||
warning("DB: Inserting package: " + package.name)
|
||||
global repo_dir
|
||||
q = self.cursor()
|
||||
q.execute("INSERT INTO Packages " +
|
||||
"(Name, CategoryID, Version, FSPath, LocationID, SubmittedTS, Description, URL) VALUES ('" +
|
||||
MySQLdb.escape_string(package.name) + "', " +
|
||||
str(self.getCategoryID(package)) + ", '" +
|
||||
MySQLdb.escape_string(package.new.version) + "', '" +
|
||||
MySQLdb.escape_string(
|
||||
os.path.join(repo_dir, os.path.basename(package.new.file))) + "', " +
|
||||
str(locationId) + ", " +
|
||||
"UNIX_TIMESTAMP(), '" +
|
||||
MySQLdb.escape_string(str(package.desc)) + "', '" +
|
||||
MySQLdb.escape_string(str(package.url)) + "')")
|
||||
id = self.lookup(package.name)
|
||||
self.insertNewInfo(package, id, locationId)
|
||||
def update(self, id, package, locationId):
|
||||
warning("DB: Updating package: " + package.name + " with id " + str(id))
|
||||
global repo_dir
|
||||
q = self.cursor()
|
||||
if (self.isdummy(package.name)):
|
||||
q.execute("UPDATE Packages SET " +
|
||||
"Version = '" + MySQLdb.escape_string(package.new.version) + "', " +
|
||||
"CategoryID = " + str(self.getCategoryID(package)) + ", " +
|
||||
"FSPath = '" + MySQLdb.escape_string(
|
||||
os.path.join(repo_dir, os.path.basename(package.new.file))) + "', " +
|
||||
"Description = '" + MySQLdb.escape_string(str(package.desc)) + "', " +
|
||||
"DummyPkg = 0, " +
|
||||
"SubmittedTS = UNIX_TIMESTAMP(), " +
|
||||
"URL = '" + MySQLdb.escape_string(str(package.url)) + "' " +
|
||||
"WHERE ID = " + str(id))
|
||||
else:
|
||||
q.execute("UPDATE Packages SET " +
|
||||
"Version = '" + MySQLdb.escape_string(package.new.version) + "', " +
|
||||
"CategoryID = " + str(self.getCategoryID(package)) + ", " +
|
||||
"FSPath = '" + MySQLdb.escape_string(
|
||||
os.path.join(repo_dir, os.path.basename(package.new.file))) + "', " +
|
||||
"Description = '" + MySQLdb.escape_string(str(package.desc)) + "', " +
|
||||
"ModifiedTS = UNIX_TIMESTAMP(), " +
|
||||
"URL = '" + MySQLdb.escape_string(str(package.url)) + "' " +
|
||||
"WHERE ID = " + str(id))
|
||||
self.insertNewInfo(package, id, locationId)
|
||||
# we must lastly check to see if this is a move of a package from
|
||||
# unsupported to community, because we'd have to reset maintainer and location
|
||||
q = self.cursor()
|
||||
q.execute("SELECT LocationID FROM Packages WHERE ID = " + str(id))
|
||||
if (q.rowcount != 0):
|
||||
row = q.fetchone()
|
||||
if (row[0] != 3):
|
||||
q = self.cursor()
|
||||
q.execute("UPDATE Packages SET LocationID = 3, MaintainerUID = null WHERE ID = " + str(id))
|
||||
def remove(self, id, locationId):
|
||||
warning("DB: Removing package with id: " + str(id))
|
||||
q = self.cursor()
|
||||
q.execute("DELETE FROM Packages WHERE " +
|
||||
"LocationID = " + str(locationId) + " AND ID = " + str(id))
|
||||
def clearOldInfo(self, id):
|
||||
warning("DB: Clearing old info for package with id : " + str(id))
|
||||
q = self.cursor()
|
||||
q.execute("DELETE FROM PackageContents WHERE PackageID = " + str(id))
|
||||
q.execute("DELETE FROM PackageDepends WHERE PackageID = " + str(id))
|
||||
q.execute("DELETE FROM PackageSources WHERE PackageID = " + str(id))
|
||||
def lookupOrDummy(self, packagename):
|
||||
retval = self.lookup(packagename)
|
||||
if (retval != None):
|
||||
return retval
|
||||
return self.createDummy(packagename)
|
||||
def lookupCategory(self, categoryname):
|
||||
warning("DB: Looking up category: " + categoryname)
|
||||
q = self.cursor()
|
||||
q.execute("SELECT ID from PackageCategories WHERE Category = '" + MySQLdb.escape_string(categoryname) + "'")
|
||||
if (q.rowcount != 0):
|
||||
row = q.fetchone()
|
||||
return row[0]
|
||||
return None
|
||||
def createDummy(self, packagename):
|
||||
warning("DB: Creating dummy package for: " + packagename)
|
||||
q = self.cursor()
|
||||
q.execute("INSERT INTO Packages " +
|
||||
"(Name, Description, LocationID, DummyPkg) " +
|
||||
"VALUES ('" +
|
||||
MySQLdb.escape_string(packagename) + "', '" +
|
||||
MySQLdb.escape_string("A dummy package") + "', 1, 1)")
|
||||
return self.lookup(packagename)
|
||||
def insertNewInfo(self, package, id, locationId):
|
||||
q = self.cursor()
|
||||
|
||||
# first delete the old; this is never bad
|
||||
self.clearOldInfo(id)
|
||||
|
||||
warning("DB: Inserting new package info for " + package.name +
|
||||
" with id " + str(id))
|
||||
|
||||
# PackageSources
|
||||
for source in package.sources:
|
||||
q.execute("INSERT INTO PackageSources (PackageID, Source) " +
|
||||
"VALUES (" + str(id) + ", '" + source + "')")
|
||||
# PackageDepends
|
||||
for dep in package.depends:
|
||||
depid = self.lookupOrDummy(dep)
|
||||
q.execute("INSERT INTO PackageDepends (PackageID, DepPkgID) " +
|
||||
"VALUES (" + str(id) + ", " + str(depid) + ")")
|
||||
def isdummy(self, packagename):
|
||||
warning("DB: Looking up package: " + packagename)
|
||||
q = self.cursor()
|
||||
q.execute("SELECT * FROM Packages WHERE Name = '" +
|
||||
MySQLdb.escape_string(packagename) + "' AND DummyPkg = 1")
|
||||
if (q.rowcount != 0):
|
||||
return True
|
||||
return False
|
||||
|
||||
############################################################
|
||||
# Functions for walking the file trees
|
||||
############################################################
|
||||
|
||||
def filesForRegexp(topdir, regexp):
|
||||
retval = []
|
||||
def matchfile(regexp, dirpath, namelist):
|
||||
for name in namelist:
|
||||
if (regexp.match(name)):
|
||||
retval.append(os.path.join(dirpath, name))
|
||||
os.path.walk(topdir, matchfile, regexp)
|
||||
return retval
|
||||
|
||||
def packagesInTree(topdir):
|
||||
return filesForRegexp(topdir, re.compile("^.*\.pkg\.tar\.gz$"))
|
||||
|
||||
def pkgbuildsInTree(topdir):
|
||||
return filesForRegexp(topdir, re.compile("^PKGBUILD$"))
|
||||
|
||||
############################################################
|
||||
# Function for testing if two files are identical
|
||||
############################################################
|
||||
|
||||
def areFilesIdentical(file_a, file_b):
	command = "cmp '" + file_a + "' '" + file_b + "' >/dev/null"
	retval = os.system(command)
	if (retval == 0):
		return True
	return False

############################################################
# Function for fetching info from PKGBUILDs and packages
############################################################

def infoFromPackageFile(filename):
	pkg = pacman.load(filename)
	return pkg.name, pkg.version + "-" + pkg.release

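# infoFromPackageFile reads name and version through the python pacman
# bindings (presumably from the metadata embedded in the tarball), not
# from the filename; contrast the x86_64 variant further below, which
# parses the filename instead.
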
def infoFromPkgbuildFile(filename):
	# first grab the category based on the file path
	pkgdirectory = os.path.dirname(filename)
	catdirectory = os.path.dirname(pkgdirectory)
	m = re.match(r".*/([^/]+)$", catdirectory)
	if (m):
		category = m.group(1)
	else:
		category = "none"

	# open and source the file
	pf_stdin, pf_stdout = os.popen2("/bin/bash", 't', 0)
	print >>pf_stdin, ". " + filename
	#print "PKGBUILD: " + filename

	# get pkgname
	print >>pf_stdin, 'echo $pkgname'
	pkgname = pf_stdout.readline().strip()
	#print "PKGBUILD: pkgname: " + pkgname

	# get pkgver
	print >>pf_stdin, 'echo $pkgver'
	pkgver = pf_stdout.readline().strip()
	#print "PKGBUILD: pkgver: " + pkgver

	# get pkgrel
	print >>pf_stdin, 'echo $pkgrel'
	pkgrel = pf_stdout.readline().strip()
	#print "PKGBUILD: pkgrel: " + pkgrel

	# get url
	print >>pf_stdin, 'echo $url'
	url = pf_stdout.readline().strip()
	#print "PKGBUILD: url: " + url

	# get desc
	print >>pf_stdin, 'echo $pkgdesc'
	pkgdesc = pf_stdout.readline().strip()
	#print "PKGBUILD: pkgdesc: " + pkgdesc

	# get source array
	print >>pf_stdin, 'echo ${source[*]}'
	source = (pf_stdout.readline().strip()).split(" ")

	# get depends array
	print >>pf_stdin, 'echo ${depends[*]}'
	depends = (pf_stdout.readline().strip()).split(" ")

	# clean up
	pf_stdin.close()
	pf_stdout.close()

	return pkgname, pkgver + "-" + pkgrel, pkgdesc, url, depends, source, category

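# Note: sourcing the PKGBUILD executes arbitrary shell code from the
# package tree, and each echo/readline pair above assumes the value fits
# on a single line; a multi-line pkgdesc would desynchronize the protocol.
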
def infoFromPkgbuildFileWorse(filename):
	# load the file with pacman library
	pkg = pacman.load(filename)
	return (pkg.name, pkg.version + "-" + pkg.release, pkg.desc,
		pkg.url, pkg.depends, pkg.source)

############################################################
# Functions for doing the final steps of execution
############################################################

def execute(command):
	global switches
	print(command)
	if not (switches.get("-n") == True):
		return os.system(command)
	return 0

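# With -n, execute() becomes a dry run: the command is still printed but
# never run, and success is returned. For example (hypothetical paths):
#   tupkgupdate -n /srv/repo /srv/pkgbuilds /srv/build
# prints the cp/rm/updatesync commands without touching the repo.
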
def copyFileToRepo(filename, repodir):
	destfile = os.path.join(repodir, os.path.basename(filename))
	command = "cp --preserve=timestamps '" + filename + "' '" + destfile + "'"
	return execute(command)

def deleteFile(filename):
	command = "rm '" + filename + "'"
	return execute(command)

def runGensync(repo, pkgbuild):
	#target = os.path.join(repo, os.path.basename(repo) + ".db.tar.gz")
	target = os.path.join(repo, "community.db.tar.gz")
	command = "gensync '" + pkgbuild + "' '" + target + "'"
	return execute(command)

def runUpdatesyncUpd(repo, pkgbuild):
	targetDB = os.path.join(repo, "community.db.tar.gz")
	command = "updatesync upd '" + targetDB + "' '" + pkgbuild + "' '" + repo + "'"
	return execute(command)

def runUpdatesyncDel(repo, pkgname):
	targetDB = os.path.join(repo, "community.db.tar.gz")
	command = "updatesync del '" + targetDB + "' '" + pkgname + "'"
	return execute(command)

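# Each helper shells out to the pacman repo tools; e.g. for a package
# "foo" in category "net" (hypothetical names), runUpdatesyncUpd() builds:
#   updatesync upd '<repo>/community.db.tar.gz' '<pkgbuild_tree>/net/foo/PKGBUILD' '<repo>'
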
############################################################
# Functions for error handling
############################################################

def warning(string):
	print >>sys.stderr, string

had_error = 0
def error(string):
	global had_error
	warning(string)
	had_error = 1

############################################################
# MAIN
############################################################

# The purpose of this version of tupkgupdate is to avoid doing as much
# time- and machine-intensive work as the full tupkgupdate does, by
# relying on the state of the mysql database to be correct. Since it
# usually is, this is a good means of effecting updates more quickly,
# and won't be a processor hog.
#
# Note that this version CANNOT HANDLE deleted packages. Those will
# have to be processed less frequently. The only way to know for sure
# that a package is missing is to look through all the PKGBUILDs,
# which is what we're trying to avoid.
#
# ARGUMENTS
#
# tupkgupdate [-n] [--delete] [--paranoid] <repo_dir> <pkgbuild_dir> <build_dir>

# First call getopt
switch_list, args_proper = getopt.getopt(sys.argv[1:], 'n',
	[ "delete", "paranoid" ])
switches = {}
for switch in switch_list:
	switches[switch[0]] = 1

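# getopt returns (option, value) pairs, so after e.g.
#   tupkgupdate -n --delete /repo /pkgbuilds /build
# switches == {'-n': 1, '--delete': 1}; the later
# switches.get("-n") == True tests work because 1 == True in Python.
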
# Then handle the remaining arguments
if (len(args_proper) < 3):
	print >>sys.stderr, "syntax: tupkgupdate [-n] [--delete] [--paranoid] <repo_dir> <pkgbuild_tree> <build_tree>"
	sys.exit(-1)

repo_dir, pkgbuild_dir, build_dir = args_proper

# Open the database so we find out now if we can't!
db = PackageDatabase(config.get('mysql', 'host'),
	config.get('mysql', 'username'),
	config.get('mysql', 'password'),
	config.get('mysql', 'db'))

# Set up the lists and tables
packages = dict()
copy = list()
delete = list()

dbremove = list()
dbmodify = list()

# PASS 1: PARSING/LOCATING/CHECKING
#
# Go through each package file in the incoming/build tree
c_files = packagesInTree(build_dir)
for c_file in c_files:

	# 1 - fetch package.name and package.new.{version,file} from package file
	# a) error and skip if invalid package file
	pkgname, ver = infoFromPackageFile(c_file)
	if (pkgname == None or ver == None):
		error("Package file '" + c_file + "' is invalid!")
		continue

	# b) error and skip if this is a duplicate we've already seen
	if (packages.get(pkgname)):
		error("Package file '" + c_file + "' is a duplicate!")
		continue

	# c) create the package structure
	package = Package()
	package.name = pkgname

	version = Version()
	version.version = ver
	version.file = c_file

	package.new = version

	# 2 - use the package name to find/parse the PKGBUILD
	ver, desc, url, depends, sources, category = findPkgbuildAndGetInfo(pkgname)

	# a) if no PKGBUILD file, ignore the built file (log a warning, and skip)
	if (ver == None):
		# no way to log warnings at the moment
		continue

	# b) check that PKGBUILD pkgver-pkgrel == package.new.version (or log error and skip)
	if (ver != package.new.version):
		error("For package '" + pkgname + "' the PKGBUILD ver '" + ver + "' doesn't match the binary package ver '" + package.new.version + "'")
		continue

	# c) populate package.{desc,url,depends,sources,category}
	package.desc = desc
	package.url = url
	package.depends = depends
	package.sources = sources
	package.category = category

	# 3 - fill in package.old.version from the mysql db ([community] only) and perform checking:
	ver = db.getCommunityVersion(package.name)

	# a) if no record in mysql db, verify package not in repo (or log error and skip)
	if (ver == None):
		evil_packages = glob.glob(os.path.join(repo_dir, package.name + "-*.pkg.tar.gz"))
		found_evil_package = 0
		for evil_package in evil_packages:
			pkgname, ver = infoFromPackageFile(evil_package)
			if (pkgname == package.name):
				error("No record of package '" + package.name + "' in [community] yet it has a package in '" + evil_package + "'")
				found_evil_package = 1
				continue
		if (found_evil_package == 1):
			continue

	# b) if record in mysql db, infer and verify proper package.old.file in repo (or log error and skip)
	else:
		inferred_old_filepath = os.path.join(repo_dir, package.name + "-" + ver + ".pkg.tar.gz")
		if (not os.path.exists(inferred_old_filepath)):
			error("The old package file '" + inferred_old_filepath + "' should exist but doesn't! Aborting.")
			continue
		# remember the old version/file so step 5d can queue it for deletion
		package.old = Version()
		package.old.version = ver
		package.old.file = inferred_old_filepath

	# 4 - If new file exists in repo already, delete it (silently) and skip
	new_filepath = os.path.join(repo_dir, os.path.basename(package.new.file))
	if (os.path.exists(new_filepath)):
		delete.append(package.new.file)
		continue

	# 5 - If we've gotten here, we are a legitimate update or new package, so:
	# a) put the package in the package list
	packages[package.name] = package

	# b) add new file to the "copy" list
	copy.append(package.new.file)

	# c) add package to "dbmodify" list
	dbmodify.append(package)

	# d) if package has an old, add old file to "delete" list
	if (not package.old == None):
		delete.append(package.old.file)

## IF WE HAVE HAD ANY ERRORS AT THIS POINT, ABORT! ##
if (had_error == 1):
	error("Aborting due to errors.")
	sys.exit(-1)

# FOR NOW, ALWAYS EXIT, for safety
warning("Would have succeeded in the operation.")
warning("DBMODIFY: " + ','.join([p.name for p in dbmodify]))
warning("COPY: " + ','.join(copy))
warning("DELETE: " + ','.join(delete))
sys.exit(-1)

# PASS 3: EXECUTION
#

# First, do all the database updates

for package in dbmodify:
	warning("DB: Package in dbmodify: " + package.name)
	id = db.lookup(package.name)
	if (id == None):
		db.insert(package, 3)
	else:
		db.update(id, package, 3)

# Copy
for file in copy:
	retval = copyFileToRepo(file, repo_dir)
	if (retval != 0):
		error("Could not copy file to repo: '" + file + "'")
		sys.exit(-1)

# Delete (second, for safety's sake)
for file in delete:
	deleteFile(file)

# Now that we've copied new files and deleted, we should delete the source
# files, if we're supposed to
if (switches.get("--delete") == True):
	for file in copy:
		deleteFile(file)

# Run updatesync where it is needed
for package in dbmodify:
	retval = runUpdatesyncUpd(repo_dir, os.path.join(pkgbuild_dir, package.category, package.name, "PKGBUILD"))
	if (retval != 0):
		error("Updatesync upd returned an error!")
		sys.exit(-1)

# vim: ft=python ts=2 sw=2 noet

@ -1,579 +0,0 @@
#!/usr/bin/python -O

import re
import os
import sys
import pacman
import getopt
import MySQLdb
import MySQLdb.connections
import ConfigParser

###########################################################
# Deal with configuration
###########################################################

conffile = '/etc/tupkgs64.conf'

if not os.path.isfile(conffile):
	print "Error: cannot access config file (" + conffile + ")"
	sys.exit(1)

config = ConfigParser.ConfigParser()
config.read(conffile)

############################################################

# Define some classes we need
class Version:
	def __init__(self):
		self.version = None
		self.file = None

class Package:
	def __init__(self):
		self.name = None
		self.category = None
		self.old = None
		self.new = None
		self.desc = None
		self.url = None
		self.depends = None
		self.sources = None

class PackageDatabase:
	def __init__(self, host, user, password, dbname):
		self.host = host
		self.user = user
		self.password = password
		self.dbname = dbname
		self.connection = MySQLdb.connect(host=host, user=user, passwd=password, db=dbname)
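	# cursor() below pings the server first and silently reconnects if the
	# connection was dropped (e.g. by MySQL's idle timeout), so callers
	# always get a usable cursor.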
	def cursor(self):
		try:
			self.connection.ping()
		except MySQLdb.OperationalError:
			self.connection = MySQLdb.connect(host=self.host, user=self.user, passwd=self.password, db=self.dbname)
		return self.connection.cursor()
	def lookup(self, packagename):
		warning("DB: Looking up package: " + packagename)
		q = self.cursor()
		q.execute("SELECT ID FROM Packages WHERE Name = '" +
			MySQLdb.escape_string(packagename) + "'")
		if (q.rowcount != 0):
			row = q.fetchone()
			return row[0]
		return None
	def getCategoryID(self, package):
		category_id = self.lookupCategory(package.category)
		if (category_id == None):
			category_id = 1
		warning("DB: Got category ID '" + str(category_id) + "' for package '" + package.name + "'")
		return category_id
	def insert(self, package, locationId):
		warning("DB: Inserting package: " + package.name)
		global repo_dir
		q = self.cursor()
		q.execute("INSERT INTO Packages " +
			"(Name, CategoryID, Version, FSPath, LocationID, SubmittedTS, Description, URL) VALUES ('" +
			MySQLdb.escape_string(package.name) + "', " +
			str(self.getCategoryID(package)) + ", '" +
			MySQLdb.escape_string(package.new.version) + "', '" +
			MySQLdb.escape_string(
				os.path.join(repo_dir, os.path.basename(package.new.file))) + "', " +
			str(locationId) + ", " +
			"UNIX_TIMESTAMP(), '" +
			MySQLdb.escape_string(str(package.desc)) + "', '" +
			MySQLdb.escape_string(str(package.url)) + "')")
		id = self.lookup(package.name)
		self.insertNewInfo(package, id, locationId)
	def update(self, id, package, locationId):
		warning("DB: Updating package: " + package.name + " with id " + str(id))
		global repo_dir
		q = self.cursor()
		if (self.isdummy(package.name)):
			q.execute("UPDATE Packages SET " +
				"Version = '" + MySQLdb.escape_string(package.new.version) + "', " +
				"CategoryID = " + str(self.getCategoryID(package)) + ", " +
				"FSPath = '" + MySQLdb.escape_string(
					os.path.join(repo_dir, os.path.basename(package.new.file))) + "', " +
				"Description = '" + MySQLdb.escape_string(str(package.desc)) + "', " +
				"DummyPkg = 0, " +
				"SubmittedTS = UNIX_TIMESTAMP(), " +
				"URL = '" + MySQLdb.escape_string(str(package.url)) + "' " +
				"WHERE ID = " + str(id))
		else:
			q.execute("UPDATE Packages SET " +
				"Version = '" + MySQLdb.escape_string(package.new.version) + "', " +
				"CategoryID = " + str(self.getCategoryID(package)) + ", " +
				"FSPath = '" + MySQLdb.escape_string(
					os.path.join(repo_dir, os.path.basename(package.new.file))) + "', " +
				"Description = '" + MySQLdb.escape_string(str(package.desc)) + "', " +
				"ModifiedTS = UNIX_TIMESTAMP(), " +
				"URL = '" + MySQLdb.escape_string(str(package.url)) + "' " +
				"WHERE ID = " + str(id))
		self.insertNewInfo(package, id, locationId)
		# we must lastly check to see if this is a move of a package from
		# unsupported to community, because we'd have to reset maintainer and location
		q = self.cursor()
		q.execute("SELECT LocationID FROM Packages WHERE ID = " + str(id))
		if (q.rowcount != 0):
			row = q.fetchone()
			if (row[0] != 3):
				q = self.cursor()
				q.execute("UPDATE Packages SET LocationID = 3, MaintainerUID = null WHERE ID = " + str(id))
	def remove(self, id, locationId):
		warning("DB: Removing package with id: " + str(id))
		q = self.cursor()
		q.execute("DELETE FROM Packages WHERE " +
			"LocationID = " + str(locationId) + " AND ID = " + str(id))
	def clearOldInfo(self, id):
		warning("DB: Clearing old info for package with id : " + str(id))
		q = self.cursor()
		q.execute("DELETE FROM PackageContents WHERE PackageID = " + str(id))
		q.execute("DELETE FROM PackageDepends WHERE PackageID = " + str(id))
		q.execute("DELETE FROM PackageSources WHERE PackageID = " + str(id))
	def lookupOrDummy(self, packagename):
		retval = self.lookup(packagename)
		if (retval != None):
			return retval
		return self.createDummy(packagename)
	def lookupCategory(self, categoryname):
		warning("DB: Looking up category: " + categoryname)
		q = self.cursor()
		q.execute("SELECT ID FROM PackageCategories WHERE Category = '" + MySQLdb.escape_string(categoryname) + "'")
		if (q.rowcount != 0):
			row = q.fetchone()
			return row[0]
		return None
	def createDummy(self, packagename):
		warning("DB: Creating dummy package for: " + packagename)
		q = self.cursor()
		q.execute("INSERT INTO Packages " +
			"(Name, Description, LocationID, DummyPkg) " +
			"VALUES ('" +
			MySQLdb.escape_string(packagename) + "', '" +
			MySQLdb.escape_string("A dummy package") + "', 1, 1)")
		return self.lookup(packagename)
	def insertNewInfo(self, package, id, locationId):
		q = self.cursor()

		# first delete the old; this is never bad
		self.clearOldInfo(id)

		warning("DB: Inserting new package info for " + package.name +
			" with id " + str(id))

		# PackageSources
		for source in package.sources:
			q.execute("INSERT INTO PackageSources (PackageID, Source) " +
				"VALUES (" + str(id) + ", '" + MySQLdb.escape_string(source) + "')")
		# PackageDepends
		for dep in package.depends:
			depid = self.lookupOrDummy(dep)
			q.execute("INSERT INTO PackageDepends (PackageID, DepPkgID) " +
				"VALUES (" + str(id) + ", " + str(depid) + ")")
	def isdummy(self, packagename):
		warning("DB: Looking up package: " + packagename)
		q = self.cursor()
		q.execute("SELECT * FROM Packages WHERE Name = '" +
			MySQLdb.escape_string(packagename) + "' AND DummyPkg = 1")
		if (q.rowcount != 0):
			return True
		return False

############################################################
# Functions for walking the file trees
############################################################

def filesForRegexp(topdir, regexp):
	retval = []
	def matchfile(regexp, dirpath, namelist):
		for name in namelist:
			if (regexp.match(name)):
				retval.append(os.path.join(dirpath, name))
	os.path.walk(topdir, matchfile, regexp)
	return retval

def packagesInTree(topdir):
	return filesForRegexp(topdir, re.compile(r"^.*\.pkg\.tar\.gz$"))

def pkgbuildsInTree(topdir):
	return filesForRegexp(topdir, re.compile(r"^PKGBUILD$"))

############################################################
# Function for testing if two files are identical
############################################################

def areFilesIdentical(file_a, file_b):
	command = "cmp '" + file_a + "' '" + file_b + "' >/dev/null"
	retval = os.system(command)
	if (retval == 0):
		return True
	return False

############################################################
# Function for fetching info from PKGBUILDs and packages
############################################################

def infoFromPackageFile(filename):
	pkg = os.path.basename(filename)
	m = re.compile(r"(?P<pkgname>.*)-(?P<pkgver>.*)-(?P<pkgrel>.*)\.pkg\.tar\.gz").search(pkg)
	if not m:
		raise Exception("Non-standard filename")
	else:
		return m.group('pkgname'), m.group('pkgver') + "-" + m.group('pkgrel')

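# Example: "foo-1.2.3-1.pkg.tar.gz" (hypothetical) parses to
# ("foo", "1.2.3-1"). Backtracking on the greedy pkgname group still
# splits hyphenated names correctly:
# "xorg-server-1.1-2.pkg.tar.gz" -> ("xorg-server", "1.1-2").
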
def infoFromPkgbuildFile(filename):
	# first grab the category based on the file path
	pkgdirectory = os.path.dirname(filename)
	catdirectory = os.path.dirname(pkgdirectory)
	m = re.match(r".*/([^/]+)$", catdirectory)
	if (m):
		category = m.group(1)
	else:
		category = "none"

	# open and source the file
	pf_stdin, pf_stdout = os.popen2("/bin/bash", 't', 0)
	print >>pf_stdin, ". " + filename
	#print "PKGBUILD: " + filename

	# get pkgname
	print >>pf_stdin, 'echo $pkgname'
	pkgname = pf_stdout.readline().strip()
	#print "PKGBUILD: pkgname: " + pkgname

	# get pkgver
	print >>pf_stdin, 'echo $pkgver'
	pkgver = pf_stdout.readline().strip()
	#print "PKGBUILD: pkgver: " + pkgver

	# get pkgrel
	print >>pf_stdin, 'echo $pkgrel'
	pkgrel = pf_stdout.readline().strip()
	#print "PKGBUILD: pkgrel: " + pkgrel

	# get url
	print >>pf_stdin, 'echo $url'
	url = pf_stdout.readline().strip()
	#print "PKGBUILD: url: " + url

	# get desc
	print >>pf_stdin, 'echo $pkgdesc'
	pkgdesc = pf_stdout.readline().strip()
	#print "PKGBUILD: pkgdesc: " + pkgdesc

	# get source array
	print >>pf_stdin, 'echo ${source[*]}'
	source = (pf_stdout.readline().strip()).split(" ")

	# get depends array
	print >>pf_stdin, 'echo ${depends[*]}'
	depends = (pf_stdout.readline().strip()).split(" ")

	# clean up
	pf_stdin.close()
	pf_stdout.close()

	return pkgname, pkgver + "-" + pkgrel, pkgdesc, url, depends, source, category

def infoFromPkgbuildFileWorse(filename):
	# load the file with pacman library
	pkg = pacman.load(filename)
	return (pkg.name, pkg.version + "-" + pkg.release, pkg.desc,
		pkg.url, pkg.depends, pkg.source)

############################################################
# Functions for doing the final steps of execution
############################################################

def execute(command):
	global switches
	print(command)
	if not (switches.get("-n") == True):
		return os.system(command)
	return 0

def copyFileToRepo(filename, repodir):
	destfile = os.path.join(repodir, os.path.basename(filename))
	command = "cp --preserve=timestamps '" + filename + "' '" + destfile + "'"
	return execute(command)

def deleteFile(filename):
	command = "rm '" + filename + "'"
	return execute(command)

def runRepoAdd(repo, package):
	global havefakeroot
	targetDB = os.path.join(repo, "community.db.tar.gz")
	destfile = os.path.join(repo, os.path.basename(package.new.file))
	if havefakeroot:
		command = "fakeroot repo-add '" + targetDB + "' '" + destfile + "'"
	else:
		command = "repo-add '" + targetDB + "' '" + destfile + "'"
	return execute(command)

def runRepoRemove(repo, pkgname):
	global havefakeroot
	targetDB = os.path.join(repo, "community.db.tar.gz")
	if havefakeroot:
		command = "fakeroot repo-remove '" + targetDB + "' '" + pkgname + "'"
	else:
		command = "repo-remove '" + targetDB + "' '" + pkgname + "'"
	return execute(command)

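# For a hypothetical /srv/repo holding foo-1.0-1.pkg.tar.gz, runRepoAdd() runs:
#   fakeroot repo-add '/srv/repo/community.db.tar.gz' '/srv/repo/foo-1.0-1.pkg.tar.gz'
# fakeroot is presumably preferred so the entries recorded in the
# regenerated db tarball appear root-owned.
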
############################################################
# Functions for error handling
############################################################

def warning(string):
	print >>sys.stderr, string

had_error = 0
def error(string):
	global had_error
	warning(string)
	had_error = 1

############################################################
# MAIN
############################################################

# ARGUMENTS
#
# tupkgupdate64 [-n] [--delete] [--paranoid] <repo_dir> <pkgbuild_dir> <build_dir>

# First call getopt
switch_list, args_proper = getopt.getopt(sys.argv[1:], 'n',
	[ "delete", "paranoid" ])
switches = {}
for switch in switch_list:
	switches[switch[0]] = 1

# Then handle the remaining arguments
if (len(args_proper) < 3):
	print >>sys.stderr, "syntax: tupkgupdate64 [-n] [--delete] [--paranoid] <repo_dir> <pkgbuild_tree> <build_tree>"
	sys.exit(-1)

# Make sure we can use fakeroot, warn if not
havefakeroot = False
if os.access('/usr/bin/fakeroot', os.X_OK):
	havefakeroot = True
else:
	warning("Not using fakeroot for repo db generation")

repo_dir, pkgbuild_dir, build_dir = args_proper

# Open the database so we find out now if we can't!
db = PackageDatabase(config.get('mysql', 'host'),
	config.get('mysql', 'username'),
	config.get('mysql', 'password'),
	config.get('mysql', 'db'))

# Set up the lists and tables
packages = dict()
copy = list()
delete = list()

dbremove = list()
dbmodify = list()

# PASS 1: PARSING/LOCATING
#
# A) Go through the PKGBUILD tree
#    For each PKGBUILD, create a Package with new Version containing
#    the parsed version and None for file

a_files = pkgbuildsInTree(pkgbuild_dir)
for a_file in a_files:
	pkgname, ver, desc, url, depends, sources, category = infoFromPkgbuildFile(a_file)

	# Error (and skip) if we encounter any invalid PKGBUILD files
	if (pkgname == None or ver == None):
		error("Pkgbuild '" + a_file + "' is invalid!")
		continue

	# Error (and skip) if we encounter any duplicate package names
	# in the PKGBUILDs
	if (packages.get(pkgname)):
		error("Pkgbuild '" + a_file + "' is a duplicate!")
		continue

	version = Version()
	version.version = ver
	version.file = None

	package = Package()
	package.name = pkgname
	package.category = category
	package.desc = desc
	package.url = url
	package.depends = depends
	package.sources = sources
	package.new = version

	# print "Package: desc " + desc

	packages[pkgname] = package

# B) Go through the old repo dir
#    For each package file we encounter, create a Package with old
#    Version containing parsed version and filepath

b_files = packagesInTree(repo_dir)
for b_file in b_files:
	pkgname, ver = infoFromPackageFile(b_file)

	version = Version()
	version.version = ver
	version.file = b_file

	package = packages.get(pkgname)
	if (package == None):
		package = Package()
		package.name = pkgname
		packages[pkgname] = package
	package.old = version

# C) Go through the build tree
#    For each package file we encounter:
#    1 - look up the package name; if it fails, ignore the file (no error)
#    2 - if package.new == None, ignore the package (no error)
#    3 - if package.new.version doesn't match, then skip (no error)
#    4 - if package.new.file == None, point it to this file;
#        otherwise, log an error (and skip)

c_files = packagesInTree(build_dir)
for c_file in c_files:
	pkgname, ver = infoFromPackageFile(c_file)

	# 1
	package = packages.get(pkgname)
	if (package == None):
		continue

	# 2
	if (package.new == None):
		continue

	# 3
	if (package.new.version != ver):
		continue

	# 4
	if (package.new.file == None):
		package.new.file = c_file
		continue
	else:
		error("Duplicate new file '" + c_file + "'")
		continue

# PASS 2: CHECKING
#
# Go through the package collection
# 1 - if package has no new, place its old file on the "delete" list (and package on "dbremove")
# 2 - if package has a new but no new.file, and the old file doesn't
#     have the same version, then error (because gensync won't rebuild)
# 3 - if package has no old, add new file to "copy" list into repo dir (and package on "dbmodify")
# 4 - if new == old and paranoid is set, compare the files and error if not the same;
#     otherwise just skip (no update)
# 5 - if we got here, it's a legit nontrivial new version which we allow;
#     add entry to "delete" list for old file and "copy" list for
#     new file into repo dir (and package to "dbmodify")

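# Worked example (hypothetical): the PKGBUILD tree carries foo 1.1-1, the
# repo holds foo-1.0-1.pkg.tar.gz, and the build tree supplies
# foo-1.1-1.pkg.tar.gz. Case 5 applies: the old file goes on "delete",
# the new file on "copy", and foo on "dbmodify".
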
for package in packages.values():
	# 1
	if (package.new == None):
		delete.append(package.old.file)
		dbremove.append(package)
		continue

	# 2
	if (package.new.file == None):
		if (package.old == None or package.old.file == None or
				package.old.version != package.new.version):
			errstr = "No new package supplied for " + package.name + " " + package.new.version + "!"
			error(errstr)
		continue

	# 3
	if (package.old == None):
		copy.append(package.new.file)
		dbmodify.append(package)
		continue

	# 4
	if (package.old.version == package.new.version):
		if (switches.get("--paranoid") == True and package.new.file != None):
			if not (areFilesIdentical(package.old.file, package.new.file)):
				warning("New package file with identical version '" +
					package.new.file + "' is different than the old one:")
				if (switches.get("--delete") == True):
					warning(" Deleting the new file.")
					delete.append(package.new.file)
				else:
					warning(" Ignoring the new file.")
		continue

	# 5
	delete.append(package.old.file)
	copy.append(package.new.file)
	dbmodify.append(package)
	continue

## IF WE HAVE HAD ANY ERRORS AT THIS POINT, ABORT! ##
if (had_error == 1):
	error("Aborting due to errors.")
	sys.exit(-1)

# PASS 3: EXECUTION
#

# First, do all the database updates
# We won't do these for x86_64 - jason Oct 1/2006
#for package in dbremove:
#	id = db.lookup(package.name)
#	# Note: this could remove a package from unsupported; probably want to restrict to locationId and/or non-dummy
#	if (id != None):
#		db.clearOldInfo(id)
#		db.remove(id, 3)
#
#for package in dbmodify:
#	warning("DB: Package in dbmodify: " + package.name)
#	id = db.lookup(package.name)
#	if (id == None):
#		db.insert(package, 3)
#	else:
#		db.update(id, package, 3)

# Copy
for file in copy:
	retval = copyFileToRepo(file, repo_dir)
	if (retval != 0):
		error("Could not copy file to repo: '" + file + "'")
		sys.exit(-1)

# Delete (second, for safety's sake)
for file in delete:
	deleteFile(file)

# Now that we've copied new files and deleted, we should delete the source
# files, if we're supposed to
if (switches.get("--delete") == True):
	for file in copy:
		deleteFile(file)

# Run repo-remove/repo-add where needed
for package in dbremove:
	retval = runRepoRemove(repo_dir, package.name)
	if (retval != 0):
		error("repo-remove returned an error!")
		sys.exit(-1)
for package in dbmodify:
	retval = runRepoAdd(repo_dir, package)
	if (retval != 0):
		error("repo-add returned an error!")
		sys.exit(-1)

# vim: ft=python ts=2 sw=2 noet