I'm making a tile-based game; I have the rendering, the main engine, everything set up nicely (btw I'm using LWJGL and Slick). Now this is more of a math question: how do I get the tile the mouse is on, so that I can draw a rectangle around that tile and later do stuff with it (e.g. if it's stone I'll break it when I click)? I draw the map at an offset so the player is always in the middle of the screen; I've called the upper-left corner coordinates xot and yot, and the offsets xOff and yOff. If anyone knows what math I need to use to get the proper tile x and y, I'd appreciate it.
2 Answers
Converting mouse coordinates into tile coordinates is quite simple, as long as you know the width and height of your tiles. All you need to do is convert mouse coordinates into world coordinates, then perform some simple math to get the indexes of the tile the cursor is lying on. Let's have an example.
A small grid
We have an 8x8 grid consisting of 32x32-pixel tiles:
Be careful when dealing with coordinates: the horizontal and vertical lines are still part of a given tile, as follows:
So, (0,0) is our world origin. The tile at the origin ends at (31,31), while every multiple of 32 is the upper-left corner of a new tile. In the figure above, the coordinates (64,32) are mapped to the tile at position [2,1] in our grid. If we consider (63,32) instead, we move to tile [1,1]. And so on.
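A quick way to check these boundary cases is plain integer division, which truncates toward zero for non-negative coordinates. This is a minimal sketch in Java (the question mentions LWJGL/Slick); `TILE_SIZE` is a constant I'm assuming from the 32x32-pixel tiles in the example:

```java
public class TileBoundaryCheck {
    // Assumed tile size from the example grid (32x32-pixel tiles)
    static final int TILE_SIZE = 32;

    public static void main(String[] args) {
        // (64,32) lands on tile [2,1]: 64/32 = 2, 32/32 = 1
        System.out.println(64 / TILE_SIZE + "," + 32 / TILE_SIZE); // prints "2,1"
        // (63,32) lands on tile [1,1]: integer division truncates 63/32 to 1
        System.out.println(63 / TILE_SIZE + "," + 32 / TILE_SIZE); // prints "1,1"
    }
}
```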
Getting coordinates
Based on the previous, we want our cursor to map to the tile at position [3,5] in our grid. The algorithm must convert mouse coordinates on the screen into world coordinates, and we do this by taking the current camera (or view) position in the world and adding it to the mouse coordinates (I'm using sort of pseudo-code):
// Convert mouse screen coords into world coords
var mouse_x = get_mouse_x();
var mouse_y = get_mouse_y();
mouse_x += camera_offset_x;
mouse_y += camera_offset_y;
At this point, your mouse_x and mouse_y variables are both holding the absolute coordinates in your world. A graphical explanation of this follows:
In our example, the mouse is at position (60,46) on the screen, and the current view is located at (64,128). So, doing the math, the result will be:
mouse_x = 124;
mouse_y = 174;
Right as it was in the very first picture.
Mapping tiles
Now, let's convert mouse coordinates in the world into tile indexes in our grid. We want our x coordinate to refer to column 0 when its value is between 0 and 31 (included), column 1 when its value is between 32 and 63 (included), and so on. The same story goes for the y coordinate and row indexes. So we divide each coordinate by the size of a tile, getting a ratio value, and finally round it down to the nearest integer to get the index on each axis:
// Get tile index from world coordinates
var tile_u = 0;
var tile_v = 0;
tile_u = floor(mouse_x/TILE_WIDTH);
tile_v = floor(mouse_y/TILE_HEIGHT);
How does this work? Our mouse_x and mouse_y variables were holding the mouse position in the world after the first part: (124,174). The math that happened here was, for each axis:
tile_u = floor(124/32) = floor(3.875) = 3;
tile_v = floor(174/32) = floor(5.4375) = 5;
Aaand here are the indexes of the tile the mouse is on:
Hope that helps.
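Here is how the two steps above might look put together in Java, since the question mentions LWJGL/Slick. Names like `TilePicker` and `pick` are my own illustrative choices, not from any particular API; I use `Math.floorDiv` rather than plain `/` so the rounding stays a true floor even for negative world coordinates:

```java
public final class TilePicker {
    private final int tileWidth;
    private final int tileHeight;

    public TilePicker(int tileWidth, int tileHeight) {
        this.tileWidth = tileWidth;
        this.tileHeight = tileHeight;
    }

    /** Converts screen-space mouse coordinates to tile indexes [u, v]. */
    public int[] pick(int mouseX, int mouseY, int cameraOffsetX, int cameraOffsetY) {
        // Step 1: screen coords -> world coords
        int worldX = mouseX + cameraOffsetX;
        int worldY = mouseY + cameraOffsetY;
        // Step 2: world coords -> tile indexes (floorDiv floors toward
        // negative infinity, unlike '/', which truncates toward zero)
        int tileU = Math.floorDiv(worldX, tileWidth);
        int tileV = Math.floorDiv(worldY, tileHeight);
        return new int[] { tileU, tileV };
    }

    public static void main(String[] args) {
        TilePicker picker = new TilePicker(32, 32);
        // The worked example above: mouse at (60,46), camera at (64,128)
        int[] tile = picker.pick(60, 46, 64, 128);
        System.out.println(tile[0] + "," + tile[1]); // prints "3,5"
    }
}
```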
If you have the coordinates of the mouse in pixels, the offset of the camera in pixels, and the size of the tiles in pixels, you just add the camera offset to the mouse coords, then divide by the tile size.
Let's say camera_offset == 60, mouse_position == 85 and tile_size == 10. Then (camera_offset + mouse_position) / tile_size == 14.5, which rounds down to tile index 14 (i.e. the 15th tile counting from zero).
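In Java that one-liner could be sketched like this (the variable names are just the ones from the example; `Math.floorDiv` keeps the rounding correct even if world coordinates go negative):

```java
public class TileFromMouse {
    public static void main(String[] args) {
        int cameraOffset = 60, mousePosition = 85, tileSize = 10;
        // (60 + 85) / 10 = 14.5, floored to tile index 14
        int tileIndex = Math.floorDiv(cameraOffset + mousePosition, tileSize);
        System.out.println(tileIndex); // prints "14"
    }
}
```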