Portage Proxy?

25 posts • Page 1 of 1

Portage Proxy?

Post by PopeKetric » Wed Mar 19, 2003 1:44 am

Forgive me if this has been asked before, but is there a 'documented' way of setting up a machine to proxy the distfiles for Portage? I maintain several Gentoo boxes around here and would like to make one machine do the following:

Act as a kind of mirror for the Portage tree, only grabbing files when they are new and requested by one of my local machines. So machine B asks machine A for file xyz.tbz, which machine A then downloads. When machine C later asks for xyz.tbz, it grabs the file from machine A directly, without using my horribly busy net connection. :)



--Pope

Post by crown » Wed Mar 19, 2003 1:58 am

As far as I know there's no direct way to do this. However, if you're willing to put some work into it, I think it can be done. What you can do is add machine A to the beginning of GENTOO_MIRRORS in your make.conf with an HTTP URL like:

Code: Select all

http://address.of.machine.a/gentoo/mirror/
Then you set up Apache to route all requests for that folder to a script you'd write. That script would return the requested package from the local system if it exists, or else fetch it like so:

Code: Select all

emerge --fetchonly <package>
and then return it. However, the request will come in as something like package-version.tar.gz, so how you would figure out exactly which package is being requested, I don't know. If it's clean like that example it's a no-brainer, but it's not always so simple.

Does any of that make sense?
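The cache-or-fetch logic described above can be sketched as a small shell function (this is an illustrative sketch, not code from this thread; `CACHE_DIR`, `UPSTREAM`, and `fetch_cached` are assumed names):

```shell
# Minimal sketch of the "serve from cache, else fetch once" idea.
# CACHE_DIR and UPSTREAM are placeholder settings, not values from the thread.
CACHE_DIR="${CACHE_DIR:-/var/cache/distfiles}"
UPSTREAM="${UPSTREAM:-http://distfiles.gentoo.org/distfiles}"

# Print the path of the locally cached copy of $1, fetching it from the
# upstream mirror first if it is not cached yet.
fetch_cached() {
    file="$1"
    if [ ! -f "$CACHE_DIR/$file" ]; then
        # Not cached yet: pull it from upstream exactly once.
        wget -q -P "$CACHE_DIR" "$UPSTREAM/$file" || return 1
    fi
    echo "$CACHE_DIR/$file"
}
```

A real deployment would serve $CACHE_DIR over HTTP (e.g. via Apache) and put that URL first in GENTOO_MIRRORS.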

Post by PopeKetric » Wed Mar 19, 2003 4:51 pm

Yeah, it does. I was already considering something like that. Perhaps it's time to dust off that mod_perl book and get to hacking, thanks for your help.



--Pope

possible solution


Post by filoseta » Tue May 13, 2003 10:42 pm

So I ran across your posts last night and decided that would be a nice thing to have for my network. What follows are two possible solutions. (I personally did not want to use an NFS-type setup; if you are able to, it would greatly ease things.)

The first, and very high on the "hack" scale, involves just modifying some variables in make.conf:

Code: Select all

FETCHCOMMAND="ssh distfiles@nada 'umask 022; /usr/bin/wget -nc -t 5 --passive-ftp \${URI} -P /home/distfiles/public_html/distfiles/' && echo \${URI} |   /bin/sed -r 's/.*\\//http:\\/\\/nada\\/~distfiles\\/distfiles\\//' |  /usr/bin/wget -t 5 --passive-ftp -P \${DISTDIR} -i -"

RESUMECOMMAND="ssh distfiles@nada 'umask 022; /usr/bin/wget -c -t 5 --passive-ftp \${URI} -P /home/distfiles/public_html/distfiles/' && echo \${URI} |   /bin/sed -r 's/.*\\//http:\\/\\/nada\\/~distfiles\\/distfiles\\//' |  /usr/bin/wget -t 5 --passive-ftp -P \${DISTDIR} -i -"

GENTOO_MIRRORS="http://nada/~distfiles <other mirrors>"
distfiles is a user on nada (my portage server) that has write permission to /home/distfiles/public_html/distfiles (my distfiles directory).

As I was going to work in automatic keychain initialization and latching, I decided on another tack, but this is still a very usable option (assuming that I actually cp/pasted correctly).


My current method:

This is not yet secure enough for public network usage; I am running everything on its own locked-down subnet. The security aspect is something I am still grappling with, so any suggestions would be welcome.

In addition to hacking the make.conf settings on your "client" systems, it is necessary to have PHP enabled on the web server of the serving system.

First, the client make.conf:

Code: Select all

FETCHCOMMAND="sudo -H -u distfiles /usr/bin/wget -t 5 --timeout=2700 http://nada/~distfiles/getdistfile2.php?url=\${URI} -P \${DISTDIR}"

RESUMECOMMAND="sudo -H -u distfiles /usr/bin/wget -c -t 5 --timeout=2700 http://nada/~distfiles/getdistfile2.php?url=\${URI} -P \${DISTDIR}"

GENTOO_MIRRORS="http://nada/~distfiles <other mirrors>"
I will get to the purpose of the sudo in a little bit. It is there for security, but it is not required; you could leave it out.

On to the PHP page that gets called on nada:

Code: Select all

<?php
$url = $_GET['url'];

// Only answer clients on the local subnet (crude check on the address prefix).
if(substr_count($_SERVER["REMOTE_ADDR"], "192.168.1.") != 1)
{
   echo "Invalid Remote Address!";
   exit;
}

// Hand the URL to the external fetch script; it prints the local mirror URL.
$exec_string = "/usr/local/phpbin/getdistfiles ";
$exec_arg = escapeshellcmd($url);

$output = exec("$exec_string $exec_arg");

// Redirect the client to the locally mirrored copy.
header("Location: $output");

?>
And the file that PHP calls. (It lives in the only PHP-executable directory because I have PHP safe mode on; if you don't, backticks or shell_exec would probably be easier.)

The reason I put the URL in a tempfile rather than pass it directly to wget is to utilize wget's URL parsing engine to hopefully strip out any command arguments that might be thrown into the requested URL by a rogue client system. This could be a lot more robust.

Code: Select all

#!/bin/bash
# Fetch the requested distfile into the shared directory, then print its
# local mirror URL for the PHP page to redirect to.
umask 022 &>/dev/null
echo "$1" > /tmp/wgetfile 2>/dev/null
/usr/bin/wget -nc -c -t 5 --passive-ftp -i /tmp/wgetfile -P /home/distfiles/public_html/distfiles &>/dev/null
/bin/rm /tmp/wgetfile 2>/dev/null
echo "$1" | /bin/sed -r 's/.*\//http:\/\/nada\/~distfiles\/distfiles\//'
OK, so I think that covers the basics of it. On to the security.

The safe mode setting in PHP forced the use of the external script; that really isn't a huge deal, as that bin directory is rather protected.

Apache security: the entire distfiles directory on the web server is protected by a very strict .htaccess file, restricted by IP and a digest user/pass.

The user/pass combo is the reason I am using the sudo command in the clients' make.conf. That user's home directory exists as a locked-down, read-only directory with a .wgetrc file specifying the username and password needed to access nada's distfiles directory.
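For reference, the .wgetrc in that locked-down home directory only needs the credential lines; something like this (the user and password here are placeholders, not values from this post):

```
# ~/.wgetrc for the sudo-target user: credentials for nada's distfiles dir
http_user = distfiles
http_passwd = secret
```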

What bothers me the most is that I needed to give Apache permission to read/write a directory that is not /tmp. There are a few solutions to this, but I haven't come across one that I am very comfortable with yet.

I think that covers it. A few closing notes: I can think of a few better ways to do this if I were willing to hack the emerge scripts; however, I don't know Python and do not foresee learning it this week. Actually, with seemingly minor modifications to emerge, the entire portage proxy would become rather simple.

A side note to a dev or someone knowledgeable in the ways of emerge: why does emerge -f need root permissions? Shouldn't it be happy as long as the user running it has permission to write to the distfiles directory? What am I missing?

Any input or questions are welcome, as I am sure I forgot something.

Post by Ethereal » Wed May 14, 2003 5:47 pm

If you want only a distfiles mirror, then simply share your distfiles over FTP or HTTP (on, for example, machine foo.bar.com) and set the mirror in make.conf like GENTOO_MIRRORS="foo.bar.com gentoo.oregonstate.edu ...".
For example:
#mkdir /home/ftp/pub/portage
#/bin/mount --bind /usr/portage /home/ftp/pub/portage
(put the last line in /etc/conf.d/local.start)
so GENTOO_MIRRORS should look like
GENTOO_MIRRORS="ftp://foo.bar.com/pub/portage"
(change the path if I am wrong)

Then, if you also want an rsync mirror, go to the Gentoo Docs; you'll find an rsync howto there.

Post by filoseta » Wed May 14, 2003 9:42 pm

Right, but I think the whole idea was the desire not to use the disk space or bandwidth required of a full mirror, and still have the local mirror automatically populated with everything needed by the client machines.

Behind my server sit 5 Gentoo boxes with different setups. It is a waste, both for me and for the Gentoo mirrors, to download the same tarballs for each machine that needs them, but I don't always know in advance what they will need, nor do I want to explicitly go to nada and download the necessary files first. With the above type of setup, if any machine needs a particular tarball it is automatically fetched and added to the server for use by any of my other machines.

For instance, if zero is emerging gkrellm, it requests the new tarball from nada. At this point nada doesn't have it, so it fetches it from the internet and points zero to the newly stored local tarball. Now, if void emerges gkrellm and requests the tarball from nada, it already exists and the only traffic is internal. By contrast, if I were to set up nada to serve just its own distfiles directory (my original setup), gkrellm would not normally exist there (nada is an Ultra 10 w/o X) and both zero and void would individually download the tarball (unless of course I was careful to transfer the gkrellm tarball from zero to nada after noticing the fetch went to the internet because nada didn't have it).

Re: possible solution


Post by Nar » Thu Jun 19, 2003 7:07 am

Hi

Is it possible to do it with prozilla?

Code: Select all

FETCHCOMMAND="ssh distfiles@nada 'umask 022; /usr/bin/wget -nc -t 5 --passive-ftp \${URI} -P /home/distfiles/public_html/distfiles/' && echo \${URI} |   /bin/sed -r 's/.*\\//http:\\/\\/nada\\/~distfiles\\/distfiles\\//' |  /usr/bin/wget -t 5 --passive-ftp -P \${DISTDIR} -i -"

RESUMECOMMAND="ssh distfiles@nada 'umask 022; /usr/bin/wget -c -t 5 --passive-ftp \${URI} -P /home/distfiles/public_html/distfiles/' && echo \${URI} |   /bin/sed -r 's/.*\\//http:\\/\\/nada\\/~distfiles\\/distfiles\\//' |  /usr/bin/wget -t 5 --passive-ftp -P \${DISTDIR} -i -"

GENTOO_MIRRORS="http://nada/~distfiles <other mirrors>"
Thank you

Nar

Re: possible solution


Post by filoseta » Sat Jun 21, 2003 3:43 pm

Nar wrote:Hi

Is it posible to do it with prozilla ?

Thank you

Nar
I have never used prozilla, but as long as it has comparable command-line options (and does not require interactive user attention), converting the settings should be trivial.

Justin

Re: Portage Proxy?


Post by CRC » Sat Sep 06, 2003 11:26 pm

PopeKetric wrote: Forgive me if this has been asked before, but is there a 'documented' way of setting up a machine to proxy the distfiles for Portage? I maintain several Gentoo boxes around here and would like to make one machine do the following:

Act as a kind of mirror for the Portage tree, only grabbing files when they are new and requested by one of my local machines. So machine B asks machine A for file xyz.tbz, which machine A then downloads. When machine C later asks for xyz.tbz, it grabs the file from machine A directly, without using my horribly busy net connection. :)

--Pope
Why bother with a proxy? Just share /usr/portage/distfiles! Portage will handle everything else for you. That should take a couple of seconds to set up. Also, if your build system is configured for binary compatibility across your machines, have emerge build binaries (using ccache and distcc) and share the binary directory too. You could even make a script that builds the package and then uses ssh to emerge the pre-built binary on every machine.
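If you go the sharing route, a plain NFS export is enough. A sketch of the two config lines involved (the server name and subnet here are placeholders):

```
# On the server, in /etc/exports:
/usr/portage/distfiles  192.168.1.0/24(rw,sync)

# On each client, in /etc/fstab:
server:/usr/portage/distfiles  /usr/portage/distfiles  nfs  defaults  0 0
```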
Unix/Linux Consulting & Hosting
We Support Gentoo!
http://CoolRunningConcepts.com

Freenode: Taro!

/usr/portage on coda or intermezzo fs


Post by marty » Tue Sep 09, 2003 12:18 am

You could possibly use one of the distributed replicating/synchronizing filesystems such as Coda or InterMezzo to accomplish this. I haven't done this myself yet, so this is all in theory. The benefit would be that you would have no "server". All the participants in the InterMezzo/Coda "cluster" would be able to "emerge sync" or download new distfiles by emerging ebuilds, which would then become available to the other machines as needed.

Re: /usr/portage on coda or intermezzo fs


Post by CRC » Tue Sep 09, 2003 3:43 pm

marty wrote: You could possibly use one of the distributed replicating/synchronizing filesystems such as Coda or InterMezzo to accomplish this. I haven't done this myself yet, so this is all in theory. The benefit would be that you would have no "server". All the participants in the InterMezzo/Coda "cluster" would be able to "emerge sync" or download new distfiles by emerging ebuilds, which would then become available to the other machines as needed.
Ever look at how InterMezzo works? It's a whole lot of pain, and definitely not recommended, especially for something like this. As for being able to emerge sync or fetch distfiles: you would still have a clash if you did two syncs, or downloaded two of the same distfiles, at the same time. In fact, you'll get lots of issues when updating the same files on different machines. If you want everything synced at once, then you don't want multiple machines touching the tree at once!

Sharing /usr/portage/distfiles is pretty clean and easy, or just share all of /usr/portage! Have one system in charge of syncing. Generally, in a multi-server environment, you don't want miscellaneous machines doing emerges anyway.

1 - Set up a single machine to be used as a test environment. This machine will be the one that does the syncs and compiles (pushing out with distcc if desired).
2 - Use the -b option when you emerge a package on the test system, to build a binary package.
3 - After fully testing in the test environment, use the -GK options on the machines on which you want to install the new package.

You can actually do all of this without NFS! And it's compatible with distcc. In smaller environments, where you can't afford a separate test server or you use wildly different configurations on each machine, just NFS-share /usr/portage.

I'll be using a lot of -B myself, with a different make.conf, to build packages for a CD: installing them into a chroot environment and making lots of small compressed, automounted filesystems (automounting them in small chunks should save ramdisk space and allow the CD to be removed when none of the "optional" automounted sections are in use).
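The binary-package flow in the steps above boils down to two commands (the package name here is just an example; -b is --buildpkg, -G -K are --getbinpkg/--usepkgonly):

```
# On the build/test machine: compile, install, and keep a binary package
emerge -b gkrellm

# On each production machine: install from the prebuilt binaries only
emerge -GK gkrellm
```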

intermezzo


Post by marty » Tue Sep 09, 2003 6:42 pm

Actually, I *have* investigated how InterMezzo works. I have also had it working on Gentoo boxes, but opted for rsync because I needed mirroring and not bidirectional synchronization.

InterMezzo is admittedly not ready for large-scale production, as there are hiccups when you attempt to start the intersync process on conflicting caches. But in design it is quite an elegant solution to implementing a distributed filesystem: hooking into the kernel via a filesystem driver module, wrapping an existing journalling filesystem (e.g. ext3), and utilizing existing user-space applications to synchronize (e.g. over HTTP[S]).

Coda, on the other hand, is more of a monolithic beast than InterMezzo, attempting to tackle all aspects of the distributed filesystem's functionality. However, it is mature, stable, and in production on systems worldwide.

Anyway, I would agree with you: your approach of a centralized sync/compile node makes a lot of sense for an environment where administration is centralized.

However, I think my proposal remains valuable in an environment where there is no central administrator and administrative chores are handled by the owners of the respective nodes: for instance, a dorm full of Gentoo boxes where each of the users actively administers their own machine, with potentially vastly different specs and configurations.

Also, just because something is a "pain" doesn't mean it's not worthwhile. Arguably, Gentoo has given me "pains" a few times. Fixing those problems has only helped me understand the inner workings of the system (and possibly helped others past the same pains). Similarly, the only way InterMezzo will fully stabilize is for people to use it, encounter pains, and report or fix them.

Useful for many separate users


Post by rfk » Sat Sep 20, 2003 3:16 am

I live at a residential college of 130, so we have our own fast internal LAN. Quite a few residents run Gentoo systems, so we set up a simple mirroring system on one of the college's servers.

It shares out the distfiles directory over HTTP, so it can be added to GENTOO_MIRRORS.
I wrote a wrapper script around wget that will upload files onto this server if they didn't originate from there:

Code: Select all

#!/bin/sh
#
#  Wraps fetch utility as called by portage, so that downloaded files can be
#  subsequently uploaded to a local mirror.
#
#  Can only deal with one file at a time, but (I assume) this is all portage
#  needs.
#
 
# Specify the command and arguments used to fetch the requested file
# FETCH_ARGS must be constructed so that when invoked as:
#
#    $FETCH_CMD $FETCH_ARGS $DEST $URI
#
# the file at $URI is saved into the file $DEST.  If we are asked to
# resume a download, the invoked command will be:
#
#    $FETCH_CMD $FETCH_RESUME $FETCH_ARGS $DEST $URI
#
FETCH_CMD="/usr/bin/wget"
FETCH_ARGS="-t 5 --passive-ftp -O "
FETCH_RESUME="-c"
 
 
#  Files will not be uploaded if $URI matches this regexp
EXCLUDE_FILES_MATCHING="whitley"
 
 
# The command to use to upload the file, and the destination for the file
# This will be invoked as:
#
#    $UPLOAD_CMD $DEST $UPLOAD_DEST
#
# where $DEST points to the downloaded file.
UPLOAD_CMD="scp -B -i /root/.ssh/gentoo_id_rsa"
UPLOAD_DEST="gentoo@gentoo.whitley.unimelb.edu.au:pub/distfiles/."
 
 
 
####  CONFIGURATION OPTIONS END HERE   ####
 
 
USAGE="$0 [-c] <URI> <DESTINATION>"
 
 
# Sanity check arguments
if [ $# -eq 3 ]
then
  if [ $1 != "-c" ]
  then
    echo "$USAGE";
    exit 1;
  fi
  URI="$2"
  DEST="$3"
else
  if [ $# -ne 2 ]
  then
    echo "$USAGE";
    exit 1;
  fi
  URI="$1"
  DEST="$2"
fi
 
 
## Do the actual download, resuming if necessary
 
if [ $# -eq 3 ]
then
  $FETCH_CMD $FETCH_RESUME $FETCH_ARGS $DEST $URI
else
  $FETCH_CMD $FETCH_ARGS $DEST $URI
fi
 
 
## Upload the file, if successful and if it is not to be excluded
 
if [ $? -eq 0 ]
then
  FNAME_TEST=`echo "$URI" | grep "$EXCLUDE_FILES_MATCHING"`
  if [ "x$FNAME_TEST" = "x" ]
  then
    echo "Uploading to local mirror"
    $UPLOAD_CMD $DEST $UPLOAD_DEST
  fi
fi

It seems to work pretty well so far. You just call this script from FETCHCOMMAND and RESUMECOMMAND in /etc/make.conf:

Code: Select all

FETCHCOMMAND="/usr/local/bin/portage-get \${URI} \${DISTDIR}/\${FILE}"
RESUMECOMMAND="/usr/local/bin/portage-get -c \${URI} \${DISTDIR}/\${FILE}"

The advantage is that each user can maintain their own setup, local distfiles, etc., without having to worry about it disappearing because it's network-mounted. We can distribute the script and the RSA key, and have new Gentoo users help with our cooperative semi-mirror.

Cheers,

Ryan

Another Mirror Setup


Post by linkfromthepast » Thu Mar 18, 2004 4:21 pm

I posted this over in a different thread, but it seems relevant here also. It's basically another way to do the internal Gentoo mirror:

http://forums.gentoo.org/viewtopic.php?t=59134

Post by Robelix » Thu Mar 18, 2004 11:23 pm

The simple way is to mount /usr/portage or /usr/portage/distfiles via NFS.
The only problem that can happen is a corrupt file if two machines download the same file at the same time.

Robelix

Re: Useful for many separate users


Post by grudge » Tue May 18, 2004 1:44 pm

Code: Select all

...
# where $DEST points to the downloaded file.
UPLOAD_CMD="scp -B -i /root/.ssh/gentoo_id_rsa"
...
How does one create the SSH keys for scp? :?:

Re: Useful for many separate users


Post by S. Traaken » Sat May 22, 2004 3:30 am

grudge wrote: How does one create the SSH keys for scp? :?:
ssh-keygen.

Code: Select all

ssh-keygen -t rsa
Should do it.

Then add the contents of $HOME/.ssh/id_rsa.pub to $HOME/.ssh/authorized_keys on the remote machine.

Code: Select all

ssh remote.machine.net cat \>\> .ssh/authorized_keys < .ssh/id_rsa.pub
(Read the man page for more details)

squid?


Post by hovenko » Thu Jun 09, 2005 8:46 pm

What about a Squid server configured to cache very large files (a maximum of 100 MB or something) and to automatically delete files older than, say, 100 days?
Fatrix

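A hedged sketch of the squid.conf directives such a setup would mainly need (the sizes and the file-extension pattern are illustrative choices, not from this post):

```
# Allow objects of up to ~100 MB into the cache
maximum_object_size 102400 KB

# Keep distfile-looking objects fresh for up to 100 days (144000 minutes)
refresh_pattern -i \.(tar\.gz|tar\.bz2|tbz2)$ 144000 100% 144000
```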

Post by mpagano » Fri Jul 01, 2005 1:41 pm

I don't think anyone has mentioned http-replicator.

http://forums.gentoo.org/viewtopic-t-17 ... art-0.html

Post by P0w3r3d » Fri Jul 01, 2005 9:33 pm

Try http-replicator... I'm using it with some changes to restrict access to only the Gentoo mirrors and some other URLs.
Here is my patch:

Code: Select all

--- /usr/bin/http-replicator    2005-04-22 14:14:20.000000000 -0400
+++ /usr/bin/http-replicator    2005-07-01 17:19:10.000000000 -0400
@@ -7,6 +7,12 @@

 import asyncore, socket, os, time, calendar, sys, re, optparse, logging

+#
+class NotPermitedURL (Exception):
+       pass
+
 #  LISTENER
 #
 #  Class [Listener] is a subclass of [asyncore.dispatcher].
@@ -103,6 +109,7 @@
                200: 'OK',
                206: 'Partial Content',
                304: 'Not Modified',
+               401: 'Unauthorized',
                404: 'Not Found',
                416: 'Requested Range Not Satisfiable',
                503: 'Service Unavailable' }
@@ -272,6 +279,7 @@
                                self.direct = 'http://' + body['host'] + head[1].rstrip('/') # save full url for file listings
                        else:
                                self.direct = '.' # fall back on relative url is host is not sent
+

                self.counterpart = HttpServer(counterpart=self) # create counterpart

@@ -280,6 +288,29 @@
                if not response: # no server response prescribed; connect

                        try:
+                               #   <---------------------------------------------->
+                               #       verifying URLs
+                               #   <---------------------------------------------->
+                               import portage
+                               test = portage.settings["GENTOO_MIRRORS"].split()
+                               f = open('/etc/http-replicator/allows','r')
+                               line = f.readlines()
+                               self.log.error('Test : %s' % head[1])
+                               f.close()
+                               test = [l + 'distfiles' for l in test]
+                               test = test + line
+                               self.log.error('Test : %s' % test)
+                               for l in test:
+                                   desturl = self.spliturl(l)
+                                   if desturl:
+                                       desthost, destport, destpath = desturl.groups() # parse url
+
+                                   if desthost.index(host) != -1 and destpath.index(path[:path.rindex('/')]) != -1:
+                                       break
+
+                               else:
+                                   raise NotPermitedURL("Url not permited")
+                               #   <---------------------------------------------->
                                assert proxy, 'direct request to hidden or unaccessible file'
                                self.log.debug('connecting to %s', host)
                                self.counterpart.create_socket(socket.AF_INET, socket.SOCK_STREAM) # create socket
@@ -289,11 +320,18 @@
                                else: # forward request to the server
                                        assert port == 80 or port > 1024, 'illegal attempt to connect to port %i' % port # security
                                        self.counterpart.connect((host, port)) # connect to server
+                       except NotPermitedURL: # connection failed
+                               exception, value, traceback = sys.exc_info() # get exception info
+                               self.log.error('connection failed: %s', value) # log reason of failure
+                               print >> self.counterpart.data, '<html><body><pre><h1>You are not authorized to access to this URL</h1></pre></body></html>'
+                               response = 503 # service unavailable
+                               self.log.error('connection failed: %s', host)
                        except: # connection failed
                                exception, value, traceback = sys.exc_info() # get exception info
                                self.log.error('connection failed: %s', value) # log reason of failure
                                print >> self.counterpart.data, '<html><body><pre><h1>Service unavailable</h1></pre></body></html>'
                                response = 503 # service unavailable
+                               self.log.error('connection failed: %s', host)

                self.counterpart.set_header(head, body) # reconstruct header and set counterpart.sending to true



Post by TheRelevator » Sun Oct 16, 2005 7:54 pm

Sorry, I'm a complete n00b at these things: how do I apply these changes? I don't know where to insert them into the files... And how do I allow access to only some specific Gentoo mirrors?

Post by P0w3r3d » Sun Oct 16, 2005 8:16 pm

  • Copy the text and paste it into a file: http-replicator.patch
  • emerge http-replicator
  • patch -p0 < http-replicator.patch

With that patch, the replicator allows connections to the mirrors in GENTOO_MIRRORS and to any others listed in /etc/http-replicator/allows.

For example, to allow
http://citkit.dl.sourceforge.net/source ... .0.tar.bz2
write
http://citkit.dl.sourceforge.net/sourceforge/gaim/
in /etc/http-replicator/allows.

Post by TheRelevator » Mon Oct 17, 2005 3:11 pm

Thanks for your help, but it does not work:

Code: Select all

meg ~ #  patch -p0 < http-replicator.patch
patching file /usr/bin/http-replicator
patch: **** malformed patch at line 14: @@ -103,6 +109,7 @@

Post by TheRelevator » Mon Oct 24, 2005 5:56 pm

Does nobody have an idea what's wrong there?

Post by P0w3r3d » Mon Oct 24, 2005 11:41 pm

Now I'm using torpage... it's very powerful... try it.
© 2001–2026 Gentoo Foundation, Inc.

Powered by phpBB® Forum Software © phpBB Limited
